Test Report: Docker_Linux_crio_arm64 17857

6e3ba89264b64b7b6259573ef051dd85e83461cf:2023-12-26:32448

Test fail (11/315)

TestAddons/parallel/Ingress (484.42s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-154736 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-154736 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-154736 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [89cc57b4-3f60-4a69-b7d3-dbc25226b9c0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:329: TestAddons/parallel/Ingress: WARNING: pod list for "default" "run=nginx" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
addons_test.go:250: ***** TestAddons/parallel/Ingress: pod "run=nginx" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:250: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-154736 -n addons-154736
addons_test.go:250: TestAddons/parallel/Ingress: showing logs for failed pods as of 2023-12-26 21:56:59.622298452 +0000 UTC m=+740.275273307
addons_test.go:250: (dbg) Run:  kubectl --context addons-154736 describe po nginx -n default
addons_test.go:250: (dbg) kubectl --context addons-154736 describe po nginx -n default:
Name:             nginx
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-154736/192.168.49.2
Start Time:       Tue, 26 Dec 2023 21:48:59 +0000
Labels:           run=nginx
Annotations:      <none>
Status:           Pending
IP:               10.244.0.27
IPs:
  IP:  10.244.0.27
Containers:
  nginx:
    Container ID:   
    Image:          docker.io/nginx:alpine
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mttn4 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  kube-api-access-mttn4:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  8m                     default-scheduler  Successfully assigned default/nginx to addons-154736
Warning  Failed     7m29s                  kubelet            Failed to pull image "docker.io/nginx:alpine": initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Warning  Failed     4m27s                  kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:7913e8fa2e6a5f0160a5e6b7ea48b7d4a301c6058d63c3d632a35a59093cb4eb in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Normal   Pulling    3m37s (x4 over 8m)     kubelet            Pulling image "docker.io/nginx:alpine"
Warning  Failed     3m7s (x4 over 7m29s)   kubelet            Error: ErrImagePull
Warning  Failed     3m7s (x2 over 5m59s)   kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Normal   BackOff    2m42s (x7 over 7m29s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
Warning  Failed     2m42s (x7 over 7m29s)  kubelet            Error: ImagePullBackOff
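
Every event above reduces to one root cause: the node pulls docker.io/nginx:alpine anonymously, and the runner's IP has exhausted Docker Hub's anonymous pull allowance (toomanyrequests). Remaining quota can be confirmed from the affected host via Docker's documented rate-limit preview endpoint; a minimal sketch, assuming curl and jq are available on the runner:

	# Fetch an anonymous token for the rate-limit preview repository, then
	# read the ratelimit-* response headers (a HEAD request does not count as a pull).
	TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	curl -s --head -H "Authorization: Bearer $TOKEN" https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit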
addons_test.go:250: (dbg) Run:  kubectl --context addons-154736 logs nginx -n default
addons_test.go:250: (dbg) Non-zero exit: kubectl --context addons-154736 logs nginx -n default: exit status 1 (122.229831ms)

** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:250: kubectl --context addons-154736 logs nginx -n default: exit status 1
addons_test.go:251: failed waiting for nginx pod: run=nginx within 8m0s: context deadline exceeded
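
The usual mitigation for this flake class is to stop pulling from docker.io anonymously: mirror the image into a registry the job is authenticated against, or attach Docker Hub credentials so kubelet pulls under an account-level limit. A sketch of the second option; the secret name and the <user>/<token> values are placeholders, not taken from this run:

	# Create a Docker Hub pull secret and attach it to the default service account
	kubectl --context addons-154736 create secret docker-registry dockerhub-creds \
	  --docker-username=<user> --docker-password=<token>
	kubectl --context addons-154736 patch serviceaccount default \
	  -p '{"imagePullSecrets": [{"name": "dockerhub-creds"}]}'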
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-154736
helpers_test.go:235: (dbg) docker inspect addons-154736:

-- stdout --
	[
	    {
	        "Id": "0927c77a91cb3abbb70268b3a742f5c0c803d1cbb42dd1efdc24f53bd33e9c94",
	        "Created": "2023-12-26T21:45:41.806387804Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 704120,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-26T21:45:42.123091502Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3bfff26a1ae256fcdf8f10a333efdefbe26edc5c1669e1cc5c973c016e44d3c4",
	        "ResolvConfPath": "/var/lib/docker/containers/0927c77a91cb3abbb70268b3a742f5c0c803d1cbb42dd1efdc24f53bd33e9c94/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0927c77a91cb3abbb70268b3a742f5c0c803d1cbb42dd1efdc24f53bd33e9c94/hostname",
	        "HostsPath": "/var/lib/docker/containers/0927c77a91cb3abbb70268b3a742f5c0c803d1cbb42dd1efdc24f53bd33e9c94/hosts",
	        "LogPath": "/var/lib/docker/containers/0927c77a91cb3abbb70268b3a742f5c0c803d1cbb42dd1efdc24f53bd33e9c94/0927c77a91cb3abbb70268b3a742f5c0c803d1cbb42dd1efdc24f53bd33e9c94-json.log",
	        "Name": "/addons-154736",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-154736:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-154736",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c0eaaf8543365e6297970bf5096d74b7af77ea75fc0bb6e681d7f593d9e01e51-init/diff:/var/lib/docker/overlay2/45396a29879cab7c8a67d68e40c59b67c1c0ba964e9ed87a152af8cc5862c477/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c0eaaf8543365e6297970bf5096d74b7af77ea75fc0bb6e681d7f593d9e01e51/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c0eaaf8543365e6297970bf5096d74b7af77ea75fc0bb6e681d7f593d9e01e51/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c0eaaf8543365e6297970bf5096d74b7af77ea75fc0bb6e681d7f593d9e01e51/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-154736",
	                "Source": "/var/lib/docker/volumes/addons-154736/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-154736",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-154736",
	                "name.minikube.sigs.k8s.io": "addons-154736",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2c290e95bcf18514e9253c173e0261fcd2cebaf9efe8ca6024d46b1bc1ba866a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33671"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33670"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33667"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33669"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33668"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/2c290e95bcf1",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-154736": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "0927c77a91cb",
	                        "addons-154736"
	                    ],
	                    "NetworkID": "0ce741a8f930f44069a6bdf9f4ed33b0b28aabc7b6040abdd1f84433f7a93e9c",
	                    "EndpointID": "0c120efe77a5545a5dd5f788310b2f79bca21a0517ac182d7e7a20aa1f26e532",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
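
The inspect dump above is mostly boilerplate; when a post-mortem only needs a few fields, docker inspect's --format flag (a Go template) extracts them directly. A sketch pulling just the container state and cluster IP shown above:

	# Print only the container state and its IP on the addons-154736 network
	docker inspect addons-154736 \
	  --format '{{.State.Status}} {{(index .NetworkSettings.Networks "addons-154736").IPAddress}}'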
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-154736 -n addons-154736
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-154736 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-154736 logs -n 25: (1.709737425s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-988176   | jenkins | v1.32.0 | 26 Dec 23 21:44 UTC |                     |
	|         | -p download-only-988176                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| start   | -o=json --download-only                                                                     | download-only-988176   | jenkins | v1.32.0 | 26 Dec 23 21:44 UTC |                     |
	|         | -p download-only-988176                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| start   | -o=json --download-only                                                                     | download-only-988176   | jenkins | v1.32.0 | 26 Dec 23 21:45 UTC |                     |
	|         | -p download-only-988176                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                                                           |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.32.0 | 26 Dec 23 21:45 UTC | 26 Dec 23 21:45 UTC |
	| delete  | -p download-only-988176                                                                     | download-only-988176   | jenkins | v1.32.0 | 26 Dec 23 21:45 UTC | 26 Dec 23 21:45 UTC |
	| delete  | -p download-only-988176                                                                     | download-only-988176   | jenkins | v1.32.0 | 26 Dec 23 21:45 UTC | 26 Dec 23 21:45 UTC |
	| start   | --download-only -p                                                                          | download-docker-374836 | jenkins | v1.32.0 | 26 Dec 23 21:45 UTC |                     |
	|         | download-docker-374836                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-374836                                                                   | download-docker-374836 | jenkins | v1.32.0 | 26 Dec 23 21:45 UTC | 26 Dec 23 21:45 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-438777   | jenkins | v1.32.0 | 26 Dec 23 21:45 UTC |                     |
	|         | binary-mirror-438777                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:45525                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-438777                                                                     | binary-mirror-438777   | jenkins | v1.32.0 | 26 Dec 23 21:45 UTC | 26 Dec 23 21:45 UTC |
	| addons  | enable dashboard -p                                                                         | addons-154736          | jenkins | v1.32.0 | 26 Dec 23 21:45 UTC |                     |
	|         | addons-154736                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-154736          | jenkins | v1.32.0 | 26 Dec 23 21:45 UTC |                     |
	|         | addons-154736                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-154736 --wait=true                                                                | addons-154736          | jenkins | v1.32.0 | 26 Dec 23 21:45 UTC | 26 Dec 23 21:48 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker                                                               |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-154736          | jenkins | v1.32.0 | 26 Dec 23 21:48 UTC | 26 Dec 23 21:48 UTC |
	|         | -p addons-154736                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-154736 ip                                                                            | addons-154736          | jenkins | v1.32.0 | 26 Dec 23 21:48 UTC | 26 Dec 23 21:48 UTC |
	| addons  | addons-154736 addons disable                                                                | addons-154736          | jenkins | v1.32.0 | 26 Dec 23 21:48 UTC | 26 Dec 23 21:48 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-154736          | jenkins | v1.32.0 | 26 Dec 23 21:48 UTC | 26 Dec 23 21:48 UTC |
	|         | -p addons-154736                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-154736 ssh cat                                                                       | addons-154736          | jenkins | v1.32.0 | 26 Dec 23 21:48 UTC | 26 Dec 23 21:48 UTC |
	|         | /opt/local-path-provisioner/pvc-e94447a0-cc9f-4ee2-b024-1e95c001aae0_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-154736 addons disable                                                                | addons-154736          | jenkins | v1.32.0 | 26 Dec 23 21:48 UTC | 26 Dec 23 21:48 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-154736          | jenkins | v1.32.0 | 26 Dec 23 21:48 UTC | 26 Dec 23 21:48 UTC |
	|         | addons-154736                                                                               |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-154736          | jenkins | v1.32.0 | 26 Dec 23 21:48 UTC | 26 Dec 23 21:48 UTC |
	|         | addons-154736                                                                               |                        |         |         |                     |                     |
	| addons  | addons-154736 addons                                                                        | addons-154736          | jenkins | v1.32.0 | 26 Dec 23 21:48 UTC | 26 Dec 23 21:48 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
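
For readability, the wrapped audit row that created the failing profile reassembles into roughly this single invocation:

	out/minikube-linux-arm64 start -p addons-154736 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker --container-runtime=crio --addons=ingress --addons=ingress-dns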
	
	
	==> Last Start <==
	Log file created at: 2023/12/26 21:45:17
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1226 21:45:17.357121  703653 out.go:296] Setting OutFile to fd 1 ...
	I1226 21:45:17.357260  703653 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 21:45:17.357268  703653 out.go:309] Setting ErrFile to fd 2...
	I1226 21:45:17.357273  703653 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 21:45:17.357532  703653 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17857-697646/.minikube/bin
	I1226 21:45:17.358030  703653 out.go:303] Setting JSON to false
	I1226 21:45:17.358813  703653 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":19651,"bootTime":1703607466,"procs":144,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1226 21:45:17.358889  703653 start.go:138] virtualization:  
	I1226 21:45:17.361650  703653 out.go:177] * [addons-154736] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1226 21:45:17.364230  703653 out.go:177]   - MINIKUBE_LOCATION=17857
	I1226 21:45:17.365978  703653 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1226 21:45:17.364366  703653 notify.go:220] Checking for updates...
	I1226 21:45:17.369777  703653 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17857-697646/kubeconfig
	I1226 21:45:17.371642  703653 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17857-697646/.minikube
	I1226 21:45:17.373457  703653 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1226 21:45:17.375253  703653 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1226 21:45:17.377723  703653 driver.go:392] Setting default libvirt URI to qemu:///system
	I1226 21:45:17.401923  703653 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1226 21:45:17.402036  703653 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 21:45:17.480034  703653 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-12-26 21:45:17.470030553 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1226 21:45:17.480160  703653 docker.go:295] overlay module found
	I1226 21:45:17.482441  703653 out.go:177] * Using the docker driver based on user configuration
	I1226 21:45:17.484480  703653 start.go:298] selected driver: docker
	I1226 21:45:17.484501  703653 start.go:902] validating driver "docker" against <nil>
	I1226 21:45:17.484556  703653 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1226 21:45:17.485187  703653 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 21:45:17.559712  703653 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-12-26 21:45:17.550602015 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1226 21:45:17.559868  703653 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1226 21:45:17.560121  703653 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1226 21:45:17.562089  703653 out.go:177] * Using Docker driver with root privileges
	I1226 21:45:17.564061  703653 cni.go:84] Creating CNI manager for ""
	I1226 21:45:17.564086  703653 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1226 21:45:17.564098  703653 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1226 21:45:17.564115  703653 start_flags.go:323] config:
	{Name:addons-154736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-154736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 21:45:17.566567  703653 out.go:177] * Starting control plane node addons-154736 in cluster addons-154736
	I1226 21:45:17.568294  703653 cache.go:121] Beginning downloading kic base image for docker with crio
	I1226 21:45:17.570474  703653 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I1226 21:45:17.572562  703653 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1226 21:45:17.572616  703653 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17857-697646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I1226 21:45:17.572637  703653 cache.go:56] Caching tarball of preloaded images
	I1226 21:45:17.572720  703653 preload.go:174] Found /home/jenkins/minikube-integration/17857-697646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1226 21:45:17.572729  703653 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1226 21:45:17.573075  703653 profile.go:148] Saving config to /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/config.json ...
	I1226 21:45:17.573094  703653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/config.json: {Name:mk543582001de673a7ac0933815d446a06676405 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:45:17.573254  703653 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I1226 21:45:17.589840  703653 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c to local cache
	I1226 21:45:17.589980  703653 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory
	I1226 21:45:17.590005  703653 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory, skipping pull
	I1226 21:45:17.590014  703653 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in cache, skipping pull
	I1226 21:45:17.590022  703653 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c as a tarball
	I1226 21:45:17.590027  703653 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c from local cache
	I1226 21:45:33.563146  703653 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c from cached tarball
	I1226 21:45:33.563189  703653 cache.go:194] Successfully downloaded all kic artifacts
	I1226 21:45:33.563259  703653 start.go:365] acquiring machines lock for addons-154736: {Name:mk2d6ec3bfe0e7c6048525ebd8a1df5b118807f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 21:45:33.563390  703653 start.go:369] acquired machines lock for "addons-154736" in 102.562µs
	I1226 21:45:33.563421  703653 start.go:93] Provisioning new machine with config: &{Name:addons-154736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-154736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1226 21:45:33.563501  703653 start.go:125] createHost starting for "" (driver="docker")
	I1226 21:45:33.565694  703653 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1226 21:45:33.565980  703653 start.go:159] libmachine.API.Create for "addons-154736" (driver="docker")
	I1226 21:45:33.566014  703653 client.go:168] LocalClient.Create starting
	I1226 21:45:33.566134  703653 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem
	I1226 21:45:34.659916  703653 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/cert.pem
	I1226 21:45:35.203057  703653 cli_runner.go:164] Run: docker network inspect addons-154736 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1226 21:45:35.223431  703653 cli_runner.go:211] docker network inspect addons-154736 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1226 21:45:35.223524  703653 network_create.go:281] running [docker network inspect addons-154736] to gather additional debugging logs...
	I1226 21:45:35.223560  703653 cli_runner.go:164] Run: docker network inspect addons-154736
	W1226 21:45:35.241707  703653 cli_runner.go:211] docker network inspect addons-154736 returned with exit code 1
	I1226 21:45:35.241741  703653 network_create.go:284] error running [docker network inspect addons-154736]: docker network inspect addons-154736: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-154736 not found
	I1226 21:45:35.241753  703653 network_create.go:286] output of [docker network inspect addons-154736]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-154736 not found
	
	** /stderr **
	I1226 21:45:35.241866  703653 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1226 21:45:35.259937  703653 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40024e5ac0}
	I1226 21:45:35.259975  703653 network_create.go:124] attempt to create docker network addons-154736 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1226 21:45:35.260042  703653 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-154736 addons-154736
	I1226 21:45:35.335366  703653 network_create.go:108] docker network addons-154736 192.168.49.0/24 created
	I1226 21:45:35.335400  703653 kic.go:121] calculated static IP "192.168.49.2" for the "addons-154736" container
	I1226 21:45:35.335480  703653 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1226 21:45:35.352573  703653 cli_runner.go:164] Run: docker volume create addons-154736 --label name.minikube.sigs.k8s.io=addons-154736 --label created_by.minikube.sigs.k8s.io=true
	I1226 21:45:35.370827  703653 oci.go:103] Successfully created a docker volume addons-154736
	I1226 21:45:35.370918  703653 cli_runner.go:164] Run: docker run --rm --name addons-154736-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-154736 --entrypoint /usr/bin/test -v addons-154736:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib
	I1226 21:45:37.529102  703653 cli_runner.go:217] Completed: docker run --rm --name addons-154736-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-154736 --entrypoint /usr/bin/test -v addons-154736:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib: (2.158128411s)
	I1226 21:45:37.529133  703653 oci.go:107] Successfully prepared a docker volume addons-154736
	I1226 21:45:37.529167  703653 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1226 21:45:37.529191  703653 kic.go:194] Starting extracting preloaded images to volume ...
	I1226 21:45:37.529267  703653 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17857-697646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-154736:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir
	I1226 21:45:41.723596  703653 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17857-697646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-154736:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir: (4.194288032s)
	I1226 21:45:41.723631  703653 kic.go:203] duration metric: took 4.194436 seconds to extract preloaded images to volume
	W1226 21:45:41.723767  703653 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1226 21:45:41.723910  703653 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1226 21:45:41.790451  703653 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-154736 --name addons-154736 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-154736 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-154736 --network addons-154736 --ip 192.168.49.2 --volume addons-154736:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c
	I1226 21:45:42.139875  703653 cli_runner.go:164] Run: docker container inspect addons-154736 --format={{.State.Running}}
	I1226 21:45:42.180243  703653 cli_runner.go:164] Run: docker container inspect addons-154736 --format={{.State.Status}}
	I1226 21:45:42.207149  703653 cli_runner.go:164] Run: docker exec addons-154736 stat /var/lib/dpkg/alternatives/iptables
	I1226 21:45:42.298703  703653 oci.go:144] the created container "addons-154736" has a running status.
	I1226 21:45:42.298732  703653 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17857-697646/.minikube/machines/addons-154736/id_rsa...
	I1226 21:45:43.584121  703653 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17857-697646/.minikube/machines/addons-154736/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1226 21:45:43.607127  703653 cli_runner.go:164] Run: docker container inspect addons-154736 --format={{.State.Status}}
	I1226 21:45:43.625334  703653 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1226 21:45:43.625358  703653 kic_runner.go:114] Args: [docker exec --privileged addons-154736 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1226 21:45:43.682322  703653 cli_runner.go:164] Run: docker container inspect addons-154736 --format={{.State.Status}}
	I1226 21:45:43.701947  703653 machine.go:88] provisioning docker machine ...
	I1226 21:45:43.701980  703653 ubuntu.go:169] provisioning hostname "addons-154736"
	I1226 21:45:43.702051  703653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154736
	I1226 21:45:43.727271  703653 main.go:141] libmachine: Using SSH client type: native
	I1226 21:45:43.727709  703653 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33671 <nil> <nil>}
	I1226 21:45:43.727730  703653 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-154736 && echo "addons-154736" | sudo tee /etc/hostname
	I1226 21:45:43.887415  703653 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-154736
	
	I1226 21:45:43.887494  703653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154736
	I1226 21:45:43.907302  703653 main.go:141] libmachine: Using SSH client type: native
	I1226 21:45:43.907710  703653 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33671 <nil> <nil>}
	I1226 21:45:43.907728  703653 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-154736' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-154736/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-154736' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1226 21:45:44.045974  703653 main.go:141] libmachine: SSH cmd err, output: <nil>: 
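
A note on the repeated docker container inspect calls above: because the container publishes 22/tcp to 127.0.0.1 with a random host port, each SSH session first resolves the current mapping from .NetworkSettings.Ports. A sketch of that lookup:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // Sketch of the host-port resolution used before every SSH call:
    // the Go template digs the first host binding out of
    // .NetworkSettings.Ports["22/tcp"].
    func hostSSHPort(container string) (string, error) {
    	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
    	out, err := exec.Command("docker", "container", "inspect",
    		"-f", format, container).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	port, err := hostSSHPort("addons-154736")
    	if err != nil {
    		fmt.Println("inspect failed:", err)
    		return
    	}
    	fmt.Println("ssh docker@127.0.0.1 -p", port) // e.g. 33671 in this run
    }
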
	I1226 21:45:44.046002  703653 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17857-697646/.minikube CaCertPath:/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17857-697646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17857-697646/.minikube}
	I1226 21:45:44.046068  703653 ubuntu.go:177] setting up certificates
	I1226 21:45:44.046078  703653 provision.go:83] configureAuth start
	I1226 21:45:44.046159  703653 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-154736
	I1226 21:45:44.065356  703653 provision.go:138] copyHostCerts
	I1226 21:45:44.065455  703653 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17857-697646/.minikube/ca.pem (1082 bytes)
	I1226 21:45:44.065605  703653 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17857-697646/.minikube/cert.pem (1123 bytes)
	I1226 21:45:44.065670  703653 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17857-697646/.minikube/key.pem (1679 bytes)
	I1226 21:45:44.065751  703653 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17857-697646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca-key.pem org=jenkins.addons-154736 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-154736]
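
The server certificate generated above carries both IP and DNS SANs so one cert serves 192.168.49.2, 127.0.0.1, localhost, and the node hostname. A self-signed Go sketch with an equivalent SAN set (minikube actually signs with its CA; this is illustrative only):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    // Sketch: self-signed server cert whose SANs match the log line
    // above. Error handling trimmed for brevity.
    func main() {
    	key, _ := rsa.GenerateKey(rand.Reader, 2048)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.addons-154736"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
    		DNSNames:     []string{"localhost", "minikube", "addons-154736"},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
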
	I1226 21:45:44.682633  703653 provision.go:172] copyRemoteCerts
	I1226 21:45:44.682703  703653 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1226 21:45:44.682742  703653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154736
	I1226 21:45:44.700544  703653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33671 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/addons-154736/id_rsa Username:docker}
	I1226 21:45:44.799116  703653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1226 21:45:44.827540  703653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1226 21:45:44.855809  703653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1226 21:45:44.884433  703653 provision.go:86] duration metric: configureAuth took 838.312591ms
	I1226 21:45:44.884503  703653 ubuntu.go:193] setting minikube options for container-runtime
	I1226 21:45:44.884750  703653 config.go:182] Loaded profile config "addons-154736": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1226 21:45:44.884864  703653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154736
	I1226 21:45:44.902598  703653 main.go:141] libmachine: Using SSH client type: native
	I1226 21:45:44.903006  703653 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33671 <nil> <nil>}
	I1226 21:45:44.903028  703653 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1226 21:45:45.207361  703653 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1226 21:45:45.207438  703653 machine.go:91] provisioned docker machine in 1.505466578s
	I1226 21:45:45.207468  703653 client.go:171] LocalClient.Create took 11.641443787s
	I1226 21:45:45.207516  703653 start.go:167] duration metric: libmachine.API.Create for "addons-154736" took 11.641516072s
	I1226 21:45:45.207545  703653 start.go:300] post-start starting for "addons-154736" (driver="docker")
	I1226 21:45:45.207576  703653 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1226 21:45:45.207691  703653 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1226 21:45:45.207766  703653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154736
	I1226 21:45:45.239347  703653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33671 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/addons-154736/id_rsa Username:docker}
	I1226 21:45:45.349897  703653 ssh_runner.go:195] Run: cat /etc/os-release
	I1226 21:45:45.355640  703653 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1226 21:45:45.355681  703653 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1226 21:45:45.355694  703653 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1226 21:45:45.355701  703653 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1226 21:45:45.355711  703653 filesync.go:126] Scanning /home/jenkins/minikube-integration/17857-697646/.minikube/addons for local assets ...
	I1226 21:45:45.355790  703653 filesync.go:126] Scanning /home/jenkins/minikube-integration/17857-697646/.minikube/files for local assets ...
	I1226 21:45:45.355834  703653 start.go:303] post-start completed in 148.256524ms
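
The three "Couldn't set key …" warnings above are harmless: /etc/os-release is plain KEY=VALUE text and the parser simply has no struct field for some keys (VERSION_CODENAME, PRIVACY_POLICY_URL, UBUNTU_CODENAME). A sketch of the same probe:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    // Sketch: parse /etc/os-release as KEY=VALUE pairs; keys with no
    // matching field are exactly the ones the log warns about.
    func main() {
    	f, err := os.Open("/etc/os-release")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()
    	kv := map[string]string{}
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		if k, v, ok := strings.Cut(sc.Text(), "="); ok {
    			kv[k] = strings.Trim(v, `"`)
    		}
    	}
    	fmt.Printf("Remote host: %s %s\n", kv["NAME"], kv["VERSION"]) // e.g. "Ubuntu 22.04.3 LTS"
    }
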
	I1226 21:45:45.356161  703653 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-154736
	I1226 21:45:45.377553  703653 profile.go:148] Saving config to /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/config.json ...
	I1226 21:45:45.377843  703653 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1226 21:45:45.377893  703653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154736
	I1226 21:45:45.397872  703653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33671 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/addons-154736/id_rsa Username:docker}
	I1226 21:45:45.494484  703653 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1226 21:45:45.500291  703653 start.go:128] duration metric: createHost completed in 11.93677223s
	I1226 21:45:45.500316  703653 start.go:83] releasing machines lock for "addons-154736", held for 11.93691191s
	I1226 21:45:45.500400  703653 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-154736
	I1226 21:45:45.518617  703653 ssh_runner.go:195] Run: cat /version.json
	I1226 21:45:45.518632  703653 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1226 21:45:45.518672  703653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154736
	I1226 21:45:45.518688  703653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154736
	I1226 21:45:45.540730  703653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33671 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/addons-154736/id_rsa Username:docker}
	I1226 21:45:45.541414  703653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33671 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/addons-154736/id_rsa Username:docker}
	I1226 21:45:45.773361  703653 ssh_runner.go:195] Run: systemctl --version
	I1226 21:45:45.779480  703653 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1226 21:45:45.928374  703653 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1226 21:45:45.934285  703653 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1226 21:45:45.960009  703653 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1226 21:45:45.960108  703653 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1226 21:45:46.007940  703653 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
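
The CNI cleanup above follows one rule: any loopback, bridge, or podman config under /etc/cni/net.d is renamed with a .mk_disabled suffix so CRI-O sees only the kindnet config minikube installs later. A sketch of that rename pass:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    // Sketch: disable competing CNI configs by appending ".mk_disabled",
    // mirroring the find/mv commands in the log above.
    func main() {
    	for _, pat := range []string{"*loopback.conf*", "*bridge*", "*podman*"} {
    		matches, _ := filepath.Glob(filepath.Join("/etc/cni/net.d", pat))
    		for _, m := range matches {
    			if filepath.Ext(m) == ".mk_disabled" {
    				continue // already disabled on a previous pass
    			}
    			if err := os.Rename(m, m+".mk_disabled"); err != nil {
    				fmt.Println("skip:", err)
    			}
    		}
    	}
    }
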
	I1226 21:45:46.008022  703653 start.go:475] detecting cgroup driver to use...
	I1226 21:45:46.008095  703653 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1226 21:45:46.008197  703653 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1226 21:45:46.027491  703653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1226 21:45:46.041920  703653 docker.go:203] disabling cri-docker service (if available) ...
	I1226 21:45:46.042015  703653 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1226 21:45:46.058996  703653 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1226 21:45:46.076168  703653 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1226 21:45:46.176076  703653 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1226 21:45:46.283099  703653 docker.go:219] disabling docker service ...
	I1226 21:45:46.283188  703653 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1226 21:45:46.304709  703653 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1226 21:45:46.318373  703653 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1226 21:45:46.415364  703653 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1226 21:45:46.525204  703653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1226 21:45:46.538401  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1226 21:45:46.558845  703653 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1226 21:45:46.558912  703653 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1226 21:45:46.570719  703653 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1226 21:45:46.570843  703653 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1226 21:45:46.582868  703653 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1226 21:45:46.594679  703653 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1226 21:45:46.607554  703653 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1226 21:45:46.618987  703653 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1226 21:45:46.629441  703653 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1226 21:45:46.640009  703653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1226 21:45:46.737436  703653 ssh_runner.go:195] Run: sudo systemctl restart crio
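
The sed/sysctl steps above amount to a small CRI-O reconfiguration: pin the pause image, force the cgroupfs manager, drop any conmon_cgroup override, enable IPv4 forwarding, then restart the daemon. A sketch of the two in-place config edits (behaviorally equivalent to the sed commands, not minikube's code):

    package main

    import (
    	"os"
    	"regexp"
    )

    // Sketch: rewrite the pause_image and cgroup_manager lines in
    // CRI-O's drop-in config, matching the sed edits in the log above.
    func main() {
    	path := "/etc/crio/crio.conf.d/02-crio.conf"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
    	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
    	if err := os.WriteFile(path, data, 0o644); err != nil {
    		panic(err)
    	}
    }
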
	I1226 21:45:46.868766  703653 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1226 21:45:46.868855  703653 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1226 21:45:46.873648  703653 start.go:543] Will wait 60s for crictl version
	I1226 21:45:46.873714  703653 ssh_runner.go:195] Run: which crictl
	I1226 21:45:46.878246  703653 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1226 21:45:46.923846  703653 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1226 21:45:46.923968  703653 ssh_runner.go:195] Run: crio --version
	I1226 21:45:46.972287  703653 ssh_runner.go:195] Run: crio --version
	I1226 21:45:47.024056  703653 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I1226 21:45:47.026175  703653 cli_runner.go:164] Run: docker network inspect addons-154736 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1226 21:45:47.043826  703653 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1226 21:45:47.048434  703653 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1226 21:45:47.062328  703653 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1226 21:45:47.062400  703653 ssh_runner.go:195] Run: sudo crictl images --output json
	I1226 21:45:47.129771  703653 crio.go:496] all images are preloaded for cri-o runtime.
	I1226 21:45:47.129798  703653 crio.go:415] Images already preloaded, skipping extraction
	I1226 21:45:47.129855  703653 ssh_runner.go:195] Run: sudo crictl images --output json
	I1226 21:45:47.170543  703653 crio.go:496] all images are preloaded for cri-o runtime.
	I1226 21:45:47.170567  703653 cache_images.go:84] Images are preloaded, skipping loading
	I1226 21:45:47.170642  703653 ssh_runner.go:195] Run: crio config
	I1226 21:45:47.225638  703653 cni.go:84] Creating CNI manager for ""
	I1226 21:45:47.225661  703653 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1226 21:45:47.225693  703653 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1226 21:45:47.225714  703653 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-154736 NodeName:addons-154736 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1226 21:45:47.225856  703653 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-154736"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
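
The generated kubeadm.yaml above is a single multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A sketch that walks those documents with the third-party gopkg.in/yaml.v3 decoder and prints each kind (the package choice is an assumption; minikube's own handling differs):

    package main

    import (
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    // Sketch: iterate the YAML documents in the generated kubeadm
    // config; unknown fields are ignored by the partial struct.
    func main() {
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()
    	dec := yaml.NewDecoder(f)
    	for {
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break
    		} else if err != nil {
    			panic(err)
    		}
    		fmt.Println(doc.Kind, "-", doc.APIVersion)
    	}
    }
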
	
	I1226 21:45:47.225917  703653 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-154736 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-154736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1226 21:45:47.225985  703653 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1226 21:45:47.237056  703653 binaries.go:44] Found k8s binaries, skipping transfer
	I1226 21:45:47.237133  703653 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1226 21:45:47.247946  703653 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I1226 21:45:47.269840  703653 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1226 21:45:47.291674  703653 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I1226 21:45:47.313342  703653 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1226 21:45:47.317861  703653 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1226 21:45:47.331485  703653 certs.go:56] Setting up /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736 for IP: 192.168.49.2
	I1226 21:45:47.331518  703653 certs.go:190] acquiring lock for shared ca certs: {Name:mke6488a150c186a525017f74b8a69a9f5240d76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:45:47.331655  703653 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17857-697646/.minikube/ca.key
	I1226 21:45:47.957856  703653 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17857-697646/.minikube/ca.crt ...
	I1226 21:45:47.957886  703653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-697646/.minikube/ca.crt: {Name:mk47f0115b5b2e0f9fb3d82c3586bf65061aba13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:45:47.958103  703653 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17857-697646/.minikube/ca.key ...
	I1226 21:45:47.958116  703653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-697646/.minikube/ca.key: {Name:mkf78405cdbf4f9984f2752ec84f5767189bbbb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:45:47.958203  703653 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17857-697646/.minikube/proxy-client-ca.key
	I1226 21:45:48.793651  703653 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17857-697646/.minikube/proxy-client-ca.crt ...
	I1226 21:45:48.793685  703653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-697646/.minikube/proxy-client-ca.crt: {Name:mkfdb1f360b5d2e7d5f43ab0b751b43bd0785f9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:45:48.793879  703653 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17857-697646/.minikube/proxy-client-ca.key ...
	I1226 21:45:48.793891  703653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-697646/.minikube/proxy-client-ca.key: {Name:mkdcc5cfb23c652bc0a238c143809b638efe2934 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:45:48.794001  703653 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/client.key
	I1226 21:45:48.794021  703653 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/client.crt with IP's: []
	I1226 21:45:49.130100  703653 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/client.crt ...
	I1226 21:45:49.130132  703653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/client.crt: {Name:mk5b103a47afc40825354234830fdd6d328e23cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:45:49.130328  703653 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/client.key ...
	I1226 21:45:49.130341  703653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/client.key: {Name:mk8109095c1de71d7b1e565af62dedeafa19e192 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:45:49.130979  703653 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/apiserver.key.dd3b5fb2
	I1226 21:45:49.131003  703653 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1226 21:45:49.342573  703653 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/apiserver.crt.dd3b5fb2 ...
	I1226 21:45:49.342606  703653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/apiserver.crt.dd3b5fb2: {Name:mkbf4e612869b6431d860f33b33b959adfcdb9d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:45:49.342798  703653 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/apiserver.key.dd3b5fb2 ...
	I1226 21:45:49.342811  703653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/apiserver.key.dd3b5fb2: {Name:mkd83808fc939bd68c9f62be01c7ab9dc98abd0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:45:49.342902  703653 certs.go:337] copying /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/apiserver.crt
	I1226 21:45:49.342981  703653 certs.go:341] copying /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/apiserver.key
	I1226 21:45:49.343036  703653 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/proxy-client.key
	I1226 21:45:49.343051  703653 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/proxy-client.crt with IP's: []
	I1226 21:45:49.729468  703653 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/proxy-client.crt ...
	I1226 21:45:49.729499  703653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/proxy-client.crt: {Name:mk5f2f9075967085d62c91e8f08859c48d8fb037 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:45:49.729683  703653 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/proxy-client.key ...
	I1226 21:45:49.729697  703653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/proxy-client.key: {Name:mk6e3ac998613569553a1ffff7932a3627336a0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:45:49.729906  703653 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca-key.pem (1675 bytes)
	I1226 21:45:49.729951  703653 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem (1082 bytes)
	I1226 21:45:49.730001  703653 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/home/jenkins/minikube-integration/17857-697646/.minikube/certs/cert.pem (1123 bytes)
	I1226 21:45:49.730030  703653 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/home/jenkins/minikube-integration/17857-697646/.minikube/certs/key.pem (1679 bytes)
	I1226 21:45:49.730627  703653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1226 21:45:49.761140  703653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1226 21:45:49.791177  703653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1226 21:45:49.820829  703653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1226 21:45:49.849585  703653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1226 21:45:49.878912  703653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1226 21:45:49.909268  703653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1226 21:45:49.941348  703653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1226 21:45:49.970836  703653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1226 21:45:49.999844  703653 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1226 21:45:50.029395  703653 ssh_runner.go:195] Run: openssl version
	I1226 21:45:50.037542  703653 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1226 21:45:50.050867  703653 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1226 21:45:50.056091  703653 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 26 21:45 /usr/share/ca-certificates/minikubeCA.pem
	I1226 21:45:50.056176  703653 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1226 21:45:50.065485  703653 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
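
The ln -fs to /etc/ssl/certs/b5213941.0 above is the standard OpenSSL trust-store convention: the link name is the CA's subject hash, as printed by openssl x509 -hash, with a .0 suffix. A sketch of the same two steps:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // Sketch: compute the subject hash of the minikube CA and install
    // the <hash>.0 symlink the system trust store expects.
    func main() {
    	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		panic(err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. b5213941 in this run
    	link := "/etc/ssl/certs/" + hash + ".0"
    	if _, err := os.Lstat(link); os.IsNotExist(err) {
    		if err := os.Symlink(pemPath, link); err != nil {
    			panic(err)
    		}
    	}
    	fmt.Println("trusted via", link)
    }
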
	I1226 21:45:50.078013  703653 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1226 21:45:50.083315  703653 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1226 21:45:50.083437  703653 kubeadm.go:404] StartCluster: {Name:addons-154736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-154736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 21:45:50.083527  703653 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1226 21:45:50.083590  703653 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1226 21:45:50.144212  703653 cri.go:89] found id: ""
	I1226 21:45:50.144288  703653 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1226 21:45:50.156065  703653 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1226 21:45:50.167528  703653 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1226 21:45:50.167619  703653 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1226 21:45:50.179275  703653 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1226 21:45:50.179343  703653 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1226 21:45:50.239492  703653 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1226 21:45:50.239794  703653 kubeadm.go:322] [preflight] Running pre-flight checks
	I1226 21:45:50.287034  703653 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1226 21:45:50.287162  703653 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-aws
	I1226 21:45:50.287219  703653 kubeadm.go:322] OS: Linux
	I1226 21:45:50.287279  703653 kubeadm.go:322] CGROUPS_CPU: enabled
	I1226 21:45:50.287349  703653 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1226 21:45:50.287410  703653 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1226 21:45:50.287480  703653 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1226 21:45:50.287553  703653 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1226 21:45:50.287623  703653 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1226 21:45:50.287682  703653 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1226 21:45:50.287752  703653 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1226 21:45:50.287813  703653 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1226 21:45:50.370303  703653 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1226 21:45:50.370463  703653 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1226 21:45:50.370584  703653 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1226 21:45:50.630713  703653 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1226 21:45:50.635156  703653 out.go:204]   - Generating certificates and keys ...
	I1226 21:45:50.635285  703653 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1226 21:45:50.635356  703653 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1226 21:45:50.944392  703653 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1226 21:45:51.488209  703653 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1226 21:45:51.868223  703653 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1226 21:45:52.038881  703653 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1226 21:45:52.370275  703653 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1226 21:45:52.370434  703653 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-154736 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1226 21:45:52.635129  703653 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1226 21:45:52.635292  703653 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-154736 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1226 21:45:53.661223  703653 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1226 21:45:54.290814  703653 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1226 21:45:54.651197  703653 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1226 21:45:54.651394  703653 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1226 21:45:55.134969  703653 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1226 21:45:55.322944  703653 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1226 21:45:55.759311  703653 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1226 21:45:55.948385  703653 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1226 21:45:55.948895  703653 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1226 21:45:55.953472  703653 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1226 21:45:55.955844  703653 out.go:204]   - Booting up control plane ...
	I1226 21:45:55.955952  703653 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1226 21:45:55.956032  703653 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1226 21:45:55.957069  703653 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1226 21:45:55.967769  703653 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1226 21:45:55.968989  703653 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1226 21:45:55.969050  703653 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1226 21:45:56.074927  703653 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1226 21:46:03.077104  703653 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.002309 seconds
	I1226 21:46:03.077227  703653 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1226 21:46:03.091854  703653 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1226 21:46:03.619549  703653 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1226 21:46:03.619738  703653 kubeadm.go:322] [mark-control-plane] Marking the node addons-154736 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1226 21:46:04.130928  703653 kubeadm.go:322] [bootstrap-token] Using token: 08smwl.lifk3a8mo3dqg185
	I1226 21:46:04.132945  703653 out.go:204]   - Configuring RBAC rules ...
	I1226 21:46:04.133066  703653 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1226 21:46:04.138717  703653 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1226 21:46:04.147006  703653 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1226 21:46:04.150888  703653 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1226 21:46:04.154871  703653 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1226 21:46:04.160999  703653 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1226 21:46:04.172410  703653 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1226 21:46:04.397107  703653 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1226 21:46:04.545271  703653 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1226 21:46:04.546497  703653 kubeadm.go:322] 
	I1226 21:46:04.546573  703653 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1226 21:46:04.546583  703653 kubeadm.go:322] 
	I1226 21:46:04.546657  703653 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1226 21:46:04.546666  703653 kubeadm.go:322] 
	I1226 21:46:04.546691  703653 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1226 21:46:04.546750  703653 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1226 21:46:04.546804  703653 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1226 21:46:04.546813  703653 kubeadm.go:322] 
	I1226 21:46:04.546864  703653 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1226 21:46:04.546872  703653 kubeadm.go:322] 
	I1226 21:46:04.546917  703653 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1226 21:46:04.546925  703653 kubeadm.go:322] 
	I1226 21:46:04.546975  703653 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1226 21:46:04.547049  703653 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1226 21:46:04.547137  703653 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1226 21:46:04.547146  703653 kubeadm.go:322] 
	I1226 21:46:04.547225  703653 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1226 21:46:04.547301  703653 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1226 21:46:04.547309  703653 kubeadm.go:322] 
	I1226 21:46:04.547388  703653 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 08smwl.lifk3a8mo3dqg185 \
	I1226 21:46:04.547489  703653 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:eb99bb3bfadffb0fd08fb657f91b723758be2a0ceabacd68fe612edc25108351 \
	I1226 21:46:04.547515  703653 kubeadm.go:322] 	--control-plane 
	I1226 21:46:04.547525  703653 kubeadm.go:322] 
	I1226 21:46:04.547605  703653 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1226 21:46:04.547613  703653 kubeadm.go:322] 
	I1226 21:46:04.547691  703653 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 08smwl.lifk3a8mo3dqg185 \
	I1226 21:46:04.547788  703653 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:eb99bb3bfadffb0fd08fb657f91b723758be2a0ceabacd68fe612edc25108351 
	I1226 21:46:04.550011  703653 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I1226 21:46:04.550125  703653 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1226 21:46:04.550144  703653 cni.go:84] Creating CNI manager for ""
	I1226 21:46:04.550153  703653 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1226 21:46:04.552156  703653 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1226 21:46:04.553984  703653 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1226 21:46:04.565752  703653 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1226 21:46:04.565773  703653 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1226 21:46:04.619814  703653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1226 21:46:05.498806  703653 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1226 21:46:05.498945  703653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:46:05.499035  703653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=393f165ced08f66e4386491f243850f87982a22b minikube.k8s.io/name=addons-154736 minikube.k8s.io/updated_at=2023_12_26T21_46_05_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:46:05.656110  703653 ops.go:34] apiserver oom_adj: -16
	I1226 21:46:05.656189  703653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:46:06.157100  703653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:46:06.656948  703653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:46:07.157012  703653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:46:07.657245  703653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:46:08.156650  703653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:46:08.656541  703653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:46:09.157216  703653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:46:09.656418  703653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:46:10.156767  703653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:46:10.656764  703653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:46:11.156322  703653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:46:11.656675  703653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:46:12.156530  703653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:46:12.656297  703653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:46:13.156968  703653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:46:13.657245  703653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:46:14.156642  703653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:46:14.656931  703653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:46:15.156946  703653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:46:15.656910  703653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:46:16.157085  703653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:46:16.656471  703653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:46:17.156278  703653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:46:17.656294  703653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:46:17.768936  703653 kubeadm.go:1088] duration metric: took 12.270036305s to wait for elevateKubeSystemPrivileges.
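
The block of near-identical "kubectl get sa default" lines above is a deliberate poll: the default ServiceAccount only exists once the controller manager has populated the namespace, so minikube retries on a fixed ~0.5s interval until the command succeeds (12.27s in this run). A sketch of that wait loop:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // Sketch: retry "kubectl get sa default" until it succeeds or the
    // deadline passes, mirroring the polling cadence in the log above.
    func waitForDefaultSA(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
    			"get", "sa", "default")
    		if err := cmd.Run(); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("default service account not ready after %s", timeout)
    }

    func main() {
    	if err := waitForDefaultSA(2 * time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
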
	I1226 21:46:17.768962  703653 kubeadm.go:406] StartCluster complete in 27.685530957s
	I1226 21:46:17.768979  703653 settings.go:142] acquiring lock: {Name:mk1b89d623875ac96830001bdd0fc2b8d8c10aec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:46:17.769094  703653 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17857-697646/kubeconfig
	I1226 21:46:17.769489  703653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-697646/kubeconfig: {Name:mk171fc32e21f516abb68bc5ebeb628b3c1d7f0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:46:17.770221  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1226 21:46:17.770499  703653 config.go:182] Loaded profile config "addons-154736": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1226 21:46:17.770617  703653 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I1226 21:46:17.770691  703653 addons.go:69] Setting yakd=true in profile "addons-154736"
	I1226 21:46:17.770708  703653 addons.go:237] Setting addon yakd=true in "addons-154736"
	I1226 21:46:17.770740  703653 host.go:66] Checking if "addons-154736" exists ...
	I1226 21:46:17.771199  703653 cli_runner.go:164] Run: docker container inspect addons-154736 --format={{.State.Status}}
	I1226 21:46:17.772718  703653 addons.go:69] Setting cloud-spanner=true in profile "addons-154736"
	I1226 21:46:17.772750  703653 addons.go:237] Setting addon cloud-spanner=true in "addons-154736"
	I1226 21:46:17.772735  703653 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-154736"
	I1226 21:46:17.772791  703653 host.go:66] Checking if "addons-154736" exists ...
	I1226 21:46:17.772832  703653 addons.go:237] Setting addon csi-hostpath-driver=true in "addons-154736"
	I1226 21:46:17.772893  703653 host.go:66] Checking if "addons-154736" exists ...
	I1226 21:46:17.773203  703653 cli_runner.go:164] Run: docker container inspect addons-154736 --format={{.State.Status}}
	I1226 21:46:17.773403  703653 cli_runner.go:164] Run: docker container inspect addons-154736 --format={{.State.Status}}
	I1226 21:46:17.772728  703653 addons.go:69] Setting metrics-server=true in profile "addons-154736"
	I1226 21:46:17.773824  703653 addons.go:237] Setting addon metrics-server=true in "addons-154736"
	I1226 21:46:17.773873  703653 host.go:66] Checking if "addons-154736" exists ...
	I1226 21:46:17.774269  703653 cli_runner.go:164] Run: docker container inspect addons-154736 --format={{.State.Status}}
	I1226 21:46:17.779281  703653 addons.go:69] Setting default-storageclass=true in profile "addons-154736"
	I1226 21:46:17.779314  703653 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-154736"
	I1226 21:46:17.779643  703653 cli_runner.go:164] Run: docker container inspect addons-154736 --format={{.State.Status}}
	I1226 21:46:17.792607  703653 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-154736"
	I1226 21:46:17.792651  703653 addons.go:237] Setting addon nvidia-device-plugin=true in "addons-154736"
	I1226 21:46:17.792699  703653 host.go:66] Checking if "addons-154736" exists ...
	I1226 21:46:17.793150  703653 cli_runner.go:164] Run: docker container inspect addons-154736 --format={{.State.Status}}
	I1226 21:46:17.804697  703653 addons.go:69] Setting gcp-auth=true in profile "addons-154736"
	I1226 21:46:17.804964  703653 mustload.go:65] Loading cluster: addons-154736
	I1226 21:46:17.805721  703653 addons.go:69] Setting registry=true in profile "addons-154736"
	I1226 21:46:17.805750  703653 addons.go:237] Setting addon registry=true in "addons-154736"
	I1226 21:46:17.805792  703653 host.go:66] Checking if "addons-154736" exists ...
	I1226 21:46:17.806207  703653 cli_runner.go:164] Run: docker container inspect addons-154736 --format={{.State.Status}}
	I1226 21:46:17.806448  703653 config.go:182] Loaded profile config "addons-154736": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1226 21:46:17.806745  703653 cli_runner.go:164] Run: docker container inspect addons-154736 --format={{.State.Status}}
	I1226 21:46:17.828753  703653 addons.go:69] Setting storage-provisioner=true in profile "addons-154736"
	I1226 21:46:17.828787  703653 addons.go:237] Setting addon storage-provisioner=true in "addons-154736"
	I1226 21:46:17.828833  703653 host.go:66] Checking if "addons-154736" exists ...
	I1226 21:46:17.829266  703653 cli_runner.go:164] Run: docker container inspect addons-154736 --format={{.State.Status}}
	I1226 21:46:17.804878  703653 addons.go:69] Setting ingress=true in profile "addons-154736"
	I1226 21:46:17.840652  703653 addons.go:237] Setting addon ingress=true in "addons-154736"
	I1226 21:46:17.840738  703653 host.go:66] Checking if "addons-154736" exists ...
	I1226 21:46:17.843946  703653 cli_runner.go:164] Run: docker container inspect addons-154736 --format={{.State.Status}}
	I1226 21:46:17.851917  703653 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-154736"
	I1226 21:46:17.851955  703653 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-154736"
	I1226 21:46:17.852399  703653 cli_runner.go:164] Run: docker container inspect addons-154736 --format={{.State.Status}}
	I1226 21:46:17.804889  703653 addons.go:69] Setting ingress-dns=true in profile "addons-154736"
	I1226 21:46:17.852637  703653 addons.go:237] Setting addon ingress-dns=true in "addons-154736"
	I1226 21:46:17.852715  703653 host.go:66] Checking if "addons-154736" exists ...
	I1226 21:46:17.853125  703653 cli_runner.go:164] Run: docker container inspect addons-154736 --format={{.State.Status}}
	I1226 21:46:17.874589  703653 addons.go:69] Setting volumesnapshots=true in profile "addons-154736"
	I1226 21:46:17.874671  703653 addons.go:237] Setting addon volumesnapshots=true in "addons-154736"
	I1226 21:46:17.874749  703653 host.go:66] Checking if "addons-154736" exists ...
	I1226 21:46:17.878996  703653 cli_runner.go:164] Run: docker container inspect addons-154736 --format={{.State.Status}}
	I1226 21:46:17.804896  703653 addons.go:69] Setting inspektor-gadget=true in profile "addons-154736"
	I1226 21:46:17.895266  703653 addons.go:237] Setting addon inspektor-gadget=true in "addons-154736"
	I1226 21:46:17.895344  703653 host.go:66] Checking if "addons-154736" exists ...
	I1226 21:46:17.895877  703653 cli_runner.go:164] Run: docker container inspect addons-154736 --format={{.State.Status}}
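Each "Setting addon" step above is gated on the same container health probe. A minimal standalone equivalent (the Go template is copied from the log; it typically prints "running" while the kic node container is up):

    docker container inspect addons-154736 --format={{.State.Status}}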
	I1226 21:46:18.007568  703653 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I1226 21:46:18.026335  703653 addons.go:429] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1226 21:46:18.026501  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1226 21:46:18.026615  703653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154736
	I1226 21:46:18.046125  703653 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.13
	I1226 21:46:18.026045  703653 host.go:66] Checking if "addons-154736" exists ...
	I1226 21:46:18.062757  703653 addons.go:429] installing /etc/kubernetes/addons/deployment.yaml
	I1226 21:46:18.066821  703653 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I1226 21:46:18.064795  703653 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1226 21:46:18.064803  703653 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1226 21:46:18.064807  703653 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1226 21:46:18.064821  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1226 21:46:18.067197  703653 addons.go:237] Setting addon default-storageclass=true in "addons-154736"
	I1226 21:46:18.073344  703653 host.go:66] Checking if "addons-154736" exists ...
	I1226 21:46:18.073857  703653 cli_runner.go:164] Run: docker container inspect addons-154736 --format={{.State.Status}}
	I1226 21:46:18.076019  703653 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1226 21:46:18.076045  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1226 21:46:18.076108  703653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154736
	I1226 21:46:18.074344  703653 addons.go:429] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1226 21:46:18.076137  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1226 21:46:18.076187  703653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154736
	I1226 21:46:18.104644  703653 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1226 21:46:18.074535  703653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154736
	I1226 21:46:18.094767  703653 addons.go:237] Setting addon storage-provisioner-rancher=true in "addons-154736"
	I1226 21:46:18.106659  703653 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1226 21:46:18.111864  703653 host.go:66] Checking if "addons-154736" exists ...
	I1226 21:46:18.113291  703653 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1226 21:46:18.125892  703653 cli_runner.go:164] Run: docker container inspect addons-154736 --format={{.State.Status}}
	I1226 21:46:18.126055  703653 addons.go:429] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1226 21:46:18.126072  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1226 21:46:18.126126  703653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154736
	I1226 21:46:18.125904  703653 out.go:177]   - Using image docker.io/registry:2.8.3
	I1226 21:46:18.156725  703653 addons.go:429] installing /etc/kubernetes/addons/registry-rc.yaml
	I1226 21:46:18.156754  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1226 21:46:18.156821  703653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154736
	I1226 21:46:18.167971  703653 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1226 21:46:18.172167  703653 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1226 21:46:18.174496  703653 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1226 21:46:18.176642  703653 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1226 21:46:18.179637  703653 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1226 21:46:18.182306  703653 addons.go:429] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1226 21:46:18.182372  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1226 21:46:18.182473  703653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154736
	I1226 21:46:18.193072  703653 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1226 21:46:18.165581  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1226 21:46:18.196363  703653 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I1226 21:46:18.196409  703653 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1226 21:46:18.202329  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1226 21:46:18.202409  703653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154736
	I1226 21:46:18.202582  703653 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.5
	I1226 21:46:18.204409  703653 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1226 21:46:18.202750  703653 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1226 21:46:18.210421  703653 addons.go:429] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1226 21:46:18.210452  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1226 21:46:18.210518  703653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154736
	I1226 21:46:18.238353  703653 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1226 21:46:18.240718  703653 addons.go:429] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1226 21:46:18.240741  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1226 21:46:18.240812  703653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154736
	I1226 21:46:18.259800  703653 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1226 21:46:18.259864  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1226 21:46:18.259969  703653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154736
	I1226 21:46:18.284670  703653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33671 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/addons-154736/id_rsa Username:docker}
	I1226 21:46:18.284999  703653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33671 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/addons-154736/id_rsa Username:docker}
	I1226 21:46:18.322452  703653 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I1226 21:46:18.322479  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1226 21:46:18.322539  703653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154736
	I1226 21:46:18.383702  703653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33671 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/addons-154736/id_rsa Username:docker}
	I1226 21:46:18.385611  703653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33671 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/addons-154736/id_rsa Username:docker}
	I1226 21:46:18.397710  703653 out.go:177]   - Using image docker.io/busybox:stable
	I1226 21:46:18.399834  703653 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1226 21:46:18.402027  703653 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1226 21:46:18.402047  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1226 21:46:18.402115  703653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154736
	I1226 21:46:18.418660  703653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33671 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/addons-154736/id_rsa Username:docker}
	I1226 21:46:18.440650  703653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33671 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/addons-154736/id_rsa Username:docker}
	I1226 21:46:18.469494  703653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33671 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/addons-154736/id_rsa Username:docker}
	I1226 21:46:18.480016  703653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33671 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/addons-154736/id_rsa Username:docker}
	I1226 21:46:18.481628  703653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33671 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/addons-154736/id_rsa Username:docker}
	I1226 21:46:18.493844  703653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33671 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/addons-154736/id_rsa Username:docker}
	I1226 21:46:18.529486  703653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33671 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/addons-154736/id_rsa Username:docker}
	I1226 21:46:18.536621  703653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33671 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/addons-154736/id_rsa Username:docker}
	I1226 21:46:18.560292  703653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33671 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/addons-154736/id_rsa Username:docker}
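All of the sshutil dials above reuse one forwarded endpoint. A hedged manual equivalent, with the port, key path, and user taken from the log lines, would be:

    ssh -p 33671 -i /home/jenkins/minikube-integration/17857-697646/.minikube/machines/addons-154736/id_rsa docker@127.0.0.1

or, more simply, minikube ssh -p addons-154736.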
	I1226 21:46:18.719345  703653 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-154736" context rescaled to 1 replicas
	I1226 21:46:18.719381  703653 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1226 21:46:18.721615  703653 out.go:177] * Verifying Kubernetes components...
	I1226 21:46:18.723683  703653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
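The kubelet probe above relies only on the systemctl exit code, since --quiet suppresses output. A simplified sketch of the same check (unit name shortened to kubelet for illustration):

    sudo systemctl is-active --quiet kubelet && echo kubelet active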
	I1226 21:46:18.823973  703653 addons.go:429] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1226 21:46:18.823993  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1226 21:46:18.843133  703653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1226 21:46:18.886287  703653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1226 21:46:18.890874  703653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1226 21:46:18.894516  703653 addons.go:429] installing /etc/kubernetes/addons/registry-svc.yaml
	I1226 21:46:18.894586  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1226 21:46:18.927177  703653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1226 21:46:18.970962  703653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1226 21:46:18.986951  703653 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1226 21:46:18.987022  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1226 21:46:19.021253  703653 addons.go:429] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1226 21:46:19.021322  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1226 21:46:19.028642  703653 addons.go:429] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1226 21:46:19.028712  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1226 21:46:19.060764  703653 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1226 21:46:19.060835  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1226 21:46:19.064691  703653 addons.go:429] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1226 21:46:19.064761  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1226 21:46:19.067867  703653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1226 21:46:19.074108  703653 addons.go:429] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1226 21:46:19.074185  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1226 21:46:19.149129  703653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1226 21:46:19.173549  703653 addons.go:429] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1226 21:46:19.173622  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1226 21:46:19.181331  703653 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1226 21:46:19.181402  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1226 21:46:19.204375  703653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1226 21:46:19.247997  703653 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1226 21:46:19.248067  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1226 21:46:19.252641  703653 addons.go:429] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1226 21:46:19.252714  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1226 21:46:19.257178  703653 addons.go:429] installing /etc/kubernetes/addons/ig-role.yaml
	I1226 21:46:19.257247  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1226 21:46:19.375270  703653 addons.go:429] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1226 21:46:19.375338  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1226 21:46:19.411800  703653 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1226 21:46:19.411874  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1226 21:46:19.474697  703653 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1226 21:46:19.474770  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1226 21:46:19.478562  703653 addons.go:429] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1226 21:46:19.478631  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1226 21:46:19.487788  703653 addons.go:429] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1226 21:46:19.487864  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1226 21:46:19.607392  703653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1226 21:46:19.643328  703653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1226 21:46:19.662535  703653 addons.go:429] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1226 21:46:19.662607  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1226 21:46:19.682945  703653 addons.go:429] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1226 21:46:19.683021  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1226 21:46:19.721905  703653 addons.go:429] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1226 21:46:19.721979  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1226 21:46:19.783208  703653 addons.go:429] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1226 21:46:19.783276  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1226 21:46:19.813062  703653 addons.go:429] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1226 21:46:19.813122  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1226 21:46:19.875998  703653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1226 21:46:19.892038  703653 addons.go:429] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1226 21:46:19.892108  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1226 21:46:19.898813  703653 addons.go:429] installing /etc/kubernetes/addons/ig-crd.yaml
	I1226 21:46:19.898880  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1226 21:46:19.970604  703653 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1226 21:46:19.970675  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1226 21:46:19.979157  703653 addons.go:429] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1226 21:46:19.979228  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I1226 21:46:20.068804  703653 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1226 21:46:20.068877  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1226 21:46:20.071774  703653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1226 21:46:20.223830  703653 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1226 21:46:20.223899  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1226 21:46:20.393097  703653 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1226 21:46:20.393163  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1226 21:46:20.525129  703653 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1226 21:46:20.525203  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1226 21:46:20.653419  703653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1226 21:46:20.842620  703653 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.646185588s)
	I1226 21:46:20.842696  703653 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1226 21:46:20.842737  703653 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.119032448s)
	I1226 21:46:20.843712  703653 node_ready.go:35] waiting up to 6m0s for node "addons-154736" to be "Ready" ...
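The Completed line above shows the CoreDNS rewrite finishing in about 2.6s. The sed pipeline splices a hosts block ahead of the forward directive in the Corefile so that guest pods can resolve host.minikube.internal; reconstructed from the sed expressions, the injected fragment is:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }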
	I1226 21:46:22.115492  703653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.272322247s)
	I1226 21:46:22.929290  703653 node_ready.go:58] node "addons-154736" has status "Ready":"False"
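node_ready polls the same Ready condition a user can query directly. One hedged way to do so, assuming the kubeconfig context created for this profile:

    kubectl --context addons-154736 get node addons-154736 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'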
	I1226 21:46:23.873940  703653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.987579254s)
	I1226 21:46:23.874031  703653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.983094048s)
	I1226 21:46:23.874054  703653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.946818712s)
	I1226 21:46:23.899655  703653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.928558019s)
	W1226 21:46:23.935588  703653 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
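The warning above is the API server's optimistic-concurrency conflict: two writers raced to update the StorageClass, so the write carrying the stale resourceVersion was rejected, and re-reading then re-applying resolves it. The default-class mark itself is just an annotation, so a hedged manual retry would be:

    kubectl --context addons-154736 patch storageclass local-path -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'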
	I1226 21:46:24.549723  703653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.481784866s)
	I1226 21:46:24.550248  703653 addons.go:473] Verifying addon ingress=true in "addons-154736"
	I1226 21:46:24.550339  703653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.34541146s)
	I1226 21:46:24.550369  703653 addons.go:473] Verifying addon registry=true in "addons-154736"
	I1226 21:46:24.552781  703653 out.go:177] * Verifying ingress addon...
	I1226 21:46:24.555640  703653 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1226 21:46:24.552845  703653 out.go:177] * Verifying registry addon...
	I1226 21:46:24.549972  703653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.90658226s)
	I1226 21:46:24.550052  703653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.673983728s)
	I1226 21:46:24.550108  703653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.478251658s)
	I1226 21:46:24.549829  703653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.400630337s)
	I1226 21:46:24.549929  703653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.942445076s)
	I1226 21:46:24.557896  703653 addons.go:473] Verifying addon metrics-server=true in "addons-154736"
	I1226 21:46:24.558715  703653 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1226 21:46:24.560838  703653 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-154736 service yakd-dashboard -n yakd-dashboard
	
	
	W1226 21:46:24.559008  703653 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1226 21:46:24.562704  703653 retry.go:31] will retry after 132.401694ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
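The stderr above ("ensure CRDs are installed first") means the VolumeSnapshotClass object was submitted in the same apply batch as its CRD, before that CRD was established in API discovery; the retry below, followed by an apply --force, is how the installer recovers. A hedged manual guard would wait for establishment before applying dependent objects:

    kubectl --context addons-154736 wait --for condition=established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s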
	I1226 21:46:24.569656  703653 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1226 21:46:24.569680  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:24.574284  703653 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1226 21:46:24.574358  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:24.695895  703653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1226 21:46:24.956686  703653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.303171454s)
	I1226 21:46:24.956765  703653 addons.go:473] Verifying addon csi-hostpath-driver=true in "addons-154736"
	I1226 21:46:24.960028  703653 out.go:177] * Verifying csi-hostpath-driver addon...
	I1226 21:46:24.963091  703653 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1226 21:46:24.975073  703653 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1226 21:46:24.975149  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
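The kapi pollers above wait on label selectors; the same view is available ad hoc (context name assumed from this profile):

    kubectl --context addons-154736 -n kube-system get pods -l kubernetes.io/minikube-addons=csi-hostpath-driver
    kubectl --context addons-154736 -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx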
	I1226 21:46:25.071087  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:25.079722  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:25.350710  703653 node_ready.go:58] node "addons-154736" has status "Ready":"False"
	I1226 21:46:25.467870  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:25.561828  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:25.571223  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:25.970056  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:26.065018  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:26.068043  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:26.205366  703653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.509380595s)
	I1226 21:46:26.468713  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:26.570044  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:26.584924  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:26.924708  703653 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1226 21:46:26.924807  703653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154736
	I1226 21:46:26.956235  703653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33671 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/addons-154736/id_rsa Username:docker}
	I1226 21:46:26.968334  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:27.062289  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:27.065379  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:27.146887  703653 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1226 21:46:27.172562  703653 addons.go:237] Setting addon gcp-auth=true in "addons-154736"
	I1226 21:46:27.172661  703653 host.go:66] Checking if "addons-154736" exists ...
	I1226 21:46:27.173152  703653 cli_runner.go:164] Run: docker container inspect addons-154736 --format={{.State.Status}}
	I1226 21:46:27.207957  703653 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1226 21:46:27.208012  703653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154736
	I1226 21:46:27.246139  703653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33671 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/addons-154736/id_rsa Username:docker}
	I1226 21:46:27.365287  703653 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1226 21:46:27.367384  703653 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1226 21:46:27.369310  703653 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1226 21:46:27.369365  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1226 21:46:27.429064  703653 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1226 21:46:27.429087  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1226 21:46:27.469042  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:27.489750  703653 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1226 21:46:27.489813  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1226 21:46:27.541813  703653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1226 21:46:27.562086  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:27.564702  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:27.847272  703653 node_ready.go:58] node "addons-154736" has status "Ready":"False"
	I1226 21:46:27.968383  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:28.062452  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:28.067190  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:28.469043  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:28.612064  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:28.613425  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:28.694641  703653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.152734381s)
	I1226 21:46:28.697723  703653 addons.go:473] Verifying addon gcp-auth=true in "addons-154736"
	I1226 21:46:28.699657  703653 out.go:177] * Verifying gcp-auth addon...
	I1226 21:46:28.702193  703653 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1226 21:46:28.713134  703653 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1226 21:46:28.713169  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
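The gcp-auth verification above watches a single webhook pod; a hedged spot check while the long Pending run below plays out:

    kubectl --context addons-154736 -n gcp-auth get pods -l kubernetes.io/minikube-addons=gcp-auth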
	I1226 21:46:28.968083  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:29.062578  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:29.065752  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:29.206575  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:29.467801  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:29.562917  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:29.566551  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:29.706567  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:29.847539  703653 node_ready.go:58] node "addons-154736" has status "Ready":"False"
	I1226 21:46:29.969640  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:30.065196  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:30.065821  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:30.207650  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:30.468600  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:30.559955  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:30.563229  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:30.706713  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:30.967667  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:31.060215  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:31.063555  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:31.206014  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:31.467822  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:31.560684  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:31.564582  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:31.706453  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:31.848194  703653 node_ready.go:58] node "addons-154736" has status "Ready":"False"
	I1226 21:46:31.967625  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:32.071768  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:32.074574  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:32.206257  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:32.467742  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:32.560210  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:32.564370  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:32.706417  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:32.968484  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:33.059977  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:33.063079  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:33.206191  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:33.469654  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:33.560128  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:33.563933  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:33.706915  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:33.968138  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:34.060699  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:34.062966  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:34.205994  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:34.347763  703653 node_ready.go:58] node "addons-154736" has status "Ready":"False"
	I1226 21:46:34.467659  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:34.560545  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:34.564030  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:34.706956  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:34.967462  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:35.060365  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:35.063174  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:35.206350  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:35.467439  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:35.560949  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:35.563816  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:35.706116  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:35.967771  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:36.060270  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:36.062972  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:36.206749  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:36.347898  703653 node_ready.go:58] node "addons-154736" has status "Ready":"False"
	I1226 21:46:36.468162  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:36.562270  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:36.565356  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:36.706459  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:36.969062  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:37.060019  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:37.062458  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:37.205865  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:37.468108  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:37.560701  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:37.563249  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:37.706501  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:37.967330  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:38.061409  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:38.064071  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:38.206358  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:38.467649  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:38.560270  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:38.563682  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:38.706163  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:38.847114  703653 node_ready.go:58] node "addons-154736" has status "Ready":"False"
	I1226 21:46:38.968020  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:39.061302  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:39.062938  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:39.206046  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:39.468201  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:39.560687  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:39.563258  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:39.705944  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:39.967732  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:40.060511  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:40.063552  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:40.206079  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:40.467704  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:40.559890  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:40.562420  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:40.706020  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:40.847265  703653 node_ready.go:58] node "addons-154736" has status "Ready":"False"
	I1226 21:46:40.967615  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:41.059557  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:41.063115  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:41.206147  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:41.468171  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:41.561080  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:41.563399  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:41.705797  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:41.968183  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:42.060346  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:42.065795  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:42.206857  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:42.467804  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:42.559940  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:42.563378  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:42.706060  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:42.847516  703653 node_ready.go:58] node "addons-154736" has status "Ready":"False"
	I1226 21:46:42.967877  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:43.061125  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:43.063696  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:43.206173  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:43.468000  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:43.560123  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:43.564203  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:43.706221  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:43.967655  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:44.060179  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:44.063269  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:44.206390  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:44.468741  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:44.560372  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:44.562859  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:44.706366  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:44.847638  703653 node_ready.go:58] node "addons-154736" has status "Ready":"False"
	I1226 21:46:44.967909  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:45.061583  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:45.064665  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:45.211106  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:45.467461  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:45.560089  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:45.562275  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:45.706638  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:45.970355  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:46.059861  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:46.063618  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:46.206118  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:46.467955  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:46.560833  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:46.563482  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:46.706762  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:46.967932  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:47.060026  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:47.062487  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:47.205482  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:47.348084  703653 node_ready.go:58] node "addons-154736" has status "Ready":"False"
	I1226 21:46:47.476263  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:47.560890  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:47.564350  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:47.706703  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:47.969547  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:48.060729  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:48.063500  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:48.206175  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:48.468027  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:48.562588  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:48.565212  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:48.706331  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:48.968883  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:49.062420  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:49.067064  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:49.205752  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:49.468835  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:49.560426  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:49.566168  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:49.706795  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:49.847821  703653 node_ready.go:58] node "addons-154736" has status "Ready":"False"
	I1226 21:46:49.967554  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:50.060438  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:50.063784  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:50.206837  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:50.468711  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:50.568015  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:50.570291  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:50.706000  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:50.968506  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:51.060136  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:51.064069  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:51.205554  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:51.468398  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:51.562146  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:51.562949  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:51.706465  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:51.848005  703653 node_ready.go:58] node "addons-154736" has status "Ready":"False"
	I1226 21:46:51.967993  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:52.060615  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:52.063507  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:52.206665  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:52.489848  703653 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1226 21:46:52.489871  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:52.570538  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:52.575568  703653 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1226 21:46:52.575599  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:52.783948  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:52.858988  703653 node_ready.go:49] node "addons-154736" has status "Ready":"True"
	I1226 21:46:52.859014  703653 node_ready.go:38] duration metric: took 32.015238708s waiting for node "addons-154736" to be "Ready" ...
	I1226 21:46:52.859025  703653 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
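	(At this point the node reports Ready and the run switches from the kapi.go:96 label-selector polls to per-pod readiness checks. For reference, the following is a minimal client-go sketch of that kind of poll: list pods by label selector and loop until every one reports the PodReady condition. This is an illustration under assumptions, not minikube's actual implementation; only the selector string is taken from the log, and the namespace, timeout, and kubeconfig lookup are illustrative.)

	// Sketch only (not minikube's code): poll pods matching a label selector
	// until all of them report the PodReady condition, or a deadline passes.
	// Assumes a working kubeconfig at the default location; the selector is
	// copied from the log above, the namespace and timeout are illustrative.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's PodReady condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		selector := "kubernetes.io/minikube-addons=csi-hostpath-driver" // from the log
		deadline := time.Now().Add(6 * time.Minute)                    // illustrative timeout

		for time.Now().Before(deadline) {
			pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 {
				ready := 0
				for i := range pods.Items {
					if podReady(&pods.Items[i]) {
						ready++
					}
				}
				fmt.Printf("%d/%d pods ready for %q\n", ready, len(pods.Items), selector)
				if ready == len(pods.Items) {
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		panic("timed out waiting for pods")
	}

	(minikube's own loop logs the observed state on every tick rather than only the final result, which is why this report carries one "waiting for pod ... current state" line per poll interval.)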
	I1226 21:46:52.873732  703653 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-gbz9g" in "kube-system" namespace to be "Ready" ...
	I1226 21:46:52.970570  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:53.062030  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:53.066822  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:53.208151  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:53.479541  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:53.561590  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:53.564565  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:53.706740  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:53.970560  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:54.062379  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:54.070050  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:54.206168  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:54.427418  703653 pod_ready.go:92] pod "coredns-5dd5756b68-gbz9g" in "kube-system" namespace has status "Ready":"True"
	I1226 21:46:54.427443  703653 pod_ready.go:81] duration metric: took 1.55363774s waiting for pod "coredns-5dd5756b68-gbz9g" in "kube-system" namespace to be "Ready" ...
	I1226 21:46:54.427484  703653 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-154736" in "kube-system" namespace to be "Ready" ...
	I1226 21:46:54.441610  703653 pod_ready.go:92] pod "etcd-addons-154736" in "kube-system" namespace has status "Ready":"True"
	I1226 21:46:54.441633  703653 pod_ready.go:81] duration metric: took 14.134232ms waiting for pod "etcd-addons-154736" in "kube-system" namespace to be "Ready" ...
	I1226 21:46:54.441673  703653 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-154736" in "kube-system" namespace to be "Ready" ...
	I1226 21:46:54.475840  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:54.482080  703653 pod_ready.go:92] pod "kube-apiserver-addons-154736" in "kube-system" namespace has status "Ready":"True"
	I1226 21:46:54.482105  703653 pod_ready.go:81] duration metric: took 40.41664ms waiting for pod "kube-apiserver-addons-154736" in "kube-system" namespace to be "Ready" ...
	I1226 21:46:54.482143  703653 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-154736" in "kube-system" namespace to be "Ready" ...
	I1226 21:46:54.503272  703653 pod_ready.go:92] pod "kube-controller-manager-addons-154736" in "kube-system" namespace has status "Ready":"True"
	I1226 21:46:54.503299  703653 pod_ready.go:81] duration metric: took 21.139593ms waiting for pod "kube-controller-manager-addons-154736" in "kube-system" namespace to be "Ready" ...
	I1226 21:46:54.503315  703653 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4r79z" in "kube-system" namespace to be "Ready" ...
	I1226 21:46:54.510754  703653 pod_ready.go:92] pod "kube-proxy-4r79z" in "kube-system" namespace has status "Ready":"True"
	I1226 21:46:54.510779  703653 pod_ready.go:81] duration metric: took 7.429869ms waiting for pod "kube-proxy-4r79z" in "kube-system" namespace to be "Ready" ...
	I1226 21:46:54.510791  703653 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-154736" in "kube-system" namespace to be "Ready" ...
	I1226 21:46:54.560768  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:54.569594  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:54.706488  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:54.849037  703653 pod_ready.go:92] pod "kube-scheduler-addons-154736" in "kube-system" namespace has status "Ready":"True"
	I1226 21:46:54.849064  703653 pod_ready.go:81] duration metric: took 338.264308ms waiting for pod "kube-scheduler-addons-154736" in "kube-system" namespace to be "Ready" ...
	I1226 21:46:54.849077  703653 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-pz8ht" in "kube-system" namespace to be "Ready" ...
	I1226 21:46:54.969242  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:55.061146  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:55.067005  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:55.207290  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:55.470788  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:55.564909  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:55.570773  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:55.706289  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:55.976219  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:56.067117  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:56.070953  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:56.207571  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:56.471026  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:56.561105  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:56.565253  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:56.707338  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:56.856778  703653 pod_ready.go:102] pod "metrics-server-7c66d45ddc-pz8ht" in "kube-system" namespace has status "Ready":"False"
	I1226 21:46:56.969325  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:57.062863  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:57.066909  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:57.206843  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:57.469685  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:57.560698  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:57.565874  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:57.706369  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:57.971579  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:58.067484  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:58.070866  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:58.206809  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:58.469998  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:58.560651  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:58.566168  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:58.705958  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:58.860185  703653 pod_ready.go:102] pod "metrics-server-7c66d45ddc-pz8ht" in "kube-system" namespace has status "Ready":"False"
	I1226 21:46:58.976070  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:59.072436  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:59.074569  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:59.205965  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:59.475382  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:59.561133  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:59.563634  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:59.706415  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:59.969341  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:00.072603  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:00.073664  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:00.210007  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:00.471336  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:00.561063  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:00.570171  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:00.708851  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:00.970634  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:01.065099  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:01.069820  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:01.207701  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:01.377053  703653 pod_ready.go:102] pod "metrics-server-7c66d45ddc-pz8ht" in "kube-system" namespace has status "Ready":"False"
	I1226 21:47:01.471027  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:01.561179  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:01.567931  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:01.707290  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:01.969516  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:02.062572  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:02.067470  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:02.206698  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:02.473756  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:02.562936  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:02.568913  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:02.714007  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:02.974098  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:03.060756  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:03.065070  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:03.207705  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:03.469972  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:03.566258  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:03.569802  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:03.707702  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:03.857703  703653 pod_ready.go:102] pod "metrics-server-7c66d45ddc-pz8ht" in "kube-system" namespace has status "Ready":"False"
	I1226 21:47:03.974229  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:04.065539  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:04.068208  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:04.206996  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:04.470685  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:04.560769  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:04.573550  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:04.707771  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:04.970025  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:05.061588  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:05.077281  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:05.206268  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:05.470787  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:05.562031  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:05.565526  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:05.706928  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:05.974070  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:06.060856  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:06.065192  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:06.205889  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:06.357938  703653 pod_ready.go:102] pod "metrics-server-7c66d45ddc-pz8ht" in "kube-system" namespace has status "Ready":"False"
	I1226 21:47:06.474136  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:06.561022  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:06.569230  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:06.706481  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:06.970270  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:07.062297  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:07.065987  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:07.206895  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:07.470017  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:07.560625  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:07.563890  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:07.706085  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:07.969070  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:08.062278  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:08.067967  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:08.206596  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:08.515424  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:08.566521  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:08.574832  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:08.713605  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:08.856198  703653 pod_ready.go:102] pod "metrics-server-7c66d45ddc-pz8ht" in "kube-system" namespace has status "Ready":"False"
	I1226 21:47:08.969419  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:09.061431  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:09.067912  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:09.206807  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:09.469389  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:09.560738  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:09.565395  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:09.706554  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:09.972022  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:10.061966  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:10.065576  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:10.206308  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:10.469790  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:10.561193  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:10.564481  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:10.707575  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:10.857077  703653 pod_ready.go:102] pod "metrics-server-7c66d45ddc-pz8ht" in "kube-system" namespace has status "Ready":"False"
	I1226 21:47:10.968937  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:11.061350  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:11.067239  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:11.206695  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:11.474233  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:11.561359  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:11.564561  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:11.706552  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:11.969464  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:12.061431  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:12.065839  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:12.205791  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:12.486311  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:12.561067  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:12.564620  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:12.708399  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:12.873072  703653 pod_ready.go:102] pod "metrics-server-7c66d45ddc-pz8ht" in "kube-system" namespace has status "Ready":"False"
	I1226 21:47:12.971333  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:13.094658  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:13.096399  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:13.208739  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:13.356187  703653 pod_ready.go:92] pod "metrics-server-7c66d45ddc-pz8ht" in "kube-system" namespace has status "Ready":"True"
	I1226 21:47:13.356216  703653 pod_ready.go:81] duration metric: took 18.507131159s waiting for pod "metrics-server-7c66d45ddc-pz8ht" in "kube-system" namespace to be "Ready" ...
	I1226 21:47:13.356229  703653 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-9xfxt" in "kube-system" namespace to be "Ready" ...
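	(After the control-plane pods, the test waits on addon pods one at a time and prints a "duration metric" when each becomes Ready, as in the pod_ready.go:81 lines above. A sketch of that single-pod wait follows; it reuses the client and the podReady helper from the previous sketch, and is again an illustration under assumptions rather than minikube's code. The pod name is taken from the log.)

	// Sketch only: wait for a single named pod to report Ready and print a
	// duration metric, mirroring the pod_ready.go lines in this log. Assumes
	// the imports, client, and podReady helper from the previous sketch.
	func waitPodReady(client *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		start := time.Now()
		deadline := start.Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Printf("duration metric: took %s waiting for pod %q to be Ready\n",
					time.Since(start), name)
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pod %q in namespace %q not Ready within %s", name, ns, timeout)
	}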
	I1226 21:47:13.469532  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:13.561073  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:13.566388  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:13.706438  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:13.970252  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:14.061999  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:14.070026  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:14.207712  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:14.471132  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:14.567489  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:14.570427  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:14.707552  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:14.970629  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:15.062212  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:15.070838  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:15.207228  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:15.363858  703653 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-9xfxt" in "kube-system" namespace has status "Ready":"False"
	I1226 21:47:15.472669  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:15.563808  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:15.568650  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:15.706794  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:15.969450  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:16.064243  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:16.065309  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:16.207222  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:16.473523  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:16.568060  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:16.569013  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:16.706515  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:16.973407  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:17.066957  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:17.072279  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:17.206427  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:17.364884  703653 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-9xfxt" in "kube-system" namespace has status "Ready":"False"
	I1226 21:47:17.468992  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:17.567084  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:17.570624  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:17.706523  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:17.969537  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:18.060154  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:18.064503  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:18.222664  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:18.469733  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:18.571327  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:18.573773  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:18.709089  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:18.968889  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:19.069311  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:19.073568  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:19.207943  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:19.482855  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:19.575162  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:19.575483  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:19.705784  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:19.863199  703653 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-9xfxt" in "kube-system" namespace has status "Ready":"False"
	I1226 21:47:19.970830  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:20.061874  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:20.065796  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:20.206850  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:20.473637  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:20.561375  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:20.565460  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:20.706454  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:20.968707  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:21.061491  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:21.065619  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:21.206526  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:21.469410  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:21.567855  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:21.568668  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:21.714494  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:21.902192  703653 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-9xfxt" in "kube-system" namespace has status "Ready":"False"
	I1226 21:47:21.968787  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:22.061541  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:22.066095  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:22.206091  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:22.492888  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:22.562627  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:22.589676  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:22.718269  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:22.971014  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:23.060948  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:23.070832  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:23.206897  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:23.469906  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:23.566160  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:23.569918  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:23.707051  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:23.970360  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:24.061380  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:24.065995  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:24.207026  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:24.365402  703653 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-9xfxt" in "kube-system" namespace has status "Ready":"False"
	I1226 21:47:24.469724  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:24.561066  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:24.565718  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:24.708651  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:24.974549  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:25.062214  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:25.065091  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:25.205842  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:25.475004  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:25.560266  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:25.564620  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:25.706207  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:25.969910  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:26.061138  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:26.064142  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:26.205918  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:26.469544  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:26.559990  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:26.564544  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:26.706927  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:26.872638  703653 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-9xfxt" in "kube-system" namespace has status "Ready":"False"
	I1226 21:47:26.989402  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:27.061512  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:27.065575  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:27.206308  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:27.470363  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:27.565960  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:27.571508  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:27.709823  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:27.983523  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:28.064834  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:28.069810  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:28.206858  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:28.469981  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:28.561866  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:28.567347  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:28.706372  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:28.969840  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:29.060913  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:29.065572  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:29.216288  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:29.410195  703653 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-9xfxt" in "kube-system" namespace has status "Ready":"False"
	I1226 21:47:29.469260  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:29.565518  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:29.567101  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:29.706298  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:29.969149  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:30.062598  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:30.071250  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:30.206184  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:30.469173  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:30.560713  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:30.563786  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:30.705967  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:30.970274  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:31.061099  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:31.064599  703653 kapi.go:107] duration metric: took 1m6.505878547s to wait for kubernetes.io/minikube-addons=registry ...
	I1226 21:47:31.206762  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:31.469586  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:31.561988  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:31.706767  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:31.864032  703653 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-9xfxt" in "kube-system" namespace has status "Ready":"False"
	I1226 21:47:31.977033  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:32.061252  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:32.206052  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:32.469257  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:32.560678  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:32.706429  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:32.972790  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:33.060407  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:33.208205  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:33.478798  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:33.561123  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:33.713322  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:33.865637  703653 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-9xfxt" in "kube-system" namespace has status "Ready":"False"
	I1226 21:47:33.969626  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:34.068821  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:34.206745  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:34.468862  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:34.561874  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:34.706720  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:34.970112  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:35.061384  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:35.206359  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:35.470876  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:35.560734  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:35.707315  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:35.969191  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:36.061593  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:36.206118  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:36.366788  703653 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-9xfxt" in "kube-system" namespace has status "Ready":"True"
	I1226 21:47:36.366814  703653 pod_ready.go:81] duration metric: took 23.010576239s waiting for pod "nvidia-device-plugin-daemonset-9xfxt" in "kube-system" namespace to be "Ready" ...
	I1226 21:47:36.366838  703653 pod_ready.go:38] duration metric: took 43.50780045s waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
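For reference, the readiness check that just completed can be approximated from the host with kubectl. This is only a hedged sketch: minikube polls via client-go rather than shelling out, the timeout here is arbitrary, and the pod name is the one observed in this run.

	$ kubectl --context addons-154736 -n kube-system wait \
	    --for=condition=Ready pod/nvidia-device-plugin-daemonset-9xfxt --timeout=120s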
	I1226 21:47:36.366858  703653 api_server.go:52] waiting for apiserver process to appear ...
	I1226 21:47:36.366895  703653 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1226 21:47:36.366967  703653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1226 21:47:36.437260  703653 cri.go:89] found id: "c5b1b0ac08cda61e9bbefc778ff2c9efec379f6aaf465452284a7a96531982d8"
	I1226 21:47:36.437371  703653 cri.go:89] found id: ""
	I1226 21:47:36.437394  703653 logs.go:284] 1 containers: [c5b1b0ac08cda61e9bbefc778ff2c9efec379f6aaf465452284a7a96531982d8]
	I1226 21:47:36.437504  703653 ssh_runner.go:195] Run: which crictl
	I1226 21:47:36.444644  703653 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1226 21:47:36.444799  703653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1226 21:47:36.470712  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:36.568139  703653 cri.go:89] found id: "a00bf48419309758b7753801eeea631f219e76d965d7b323f5e9abc71c187c1a"
	I1226 21:47:36.568223  703653 cri.go:89] found id: ""
	I1226 21:47:36.568258  703653 logs.go:284] 1 containers: [a00bf48419309758b7753801eeea631f219e76d965d7b323f5e9abc71c187c1a]
	I1226 21:47:36.568371  703653 ssh_runner.go:195] Run: which crictl
	I1226 21:47:36.574008  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:36.589803  703653 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1226 21:47:36.589967  703653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1226 21:47:36.709373  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:36.777925  703653 cri.go:89] found id: "0b9784687fdf8249ef1817273d553c0e4539fdc99e2ed4e51e1e88341426888a"
	I1226 21:47:36.777994  703653 cri.go:89] found id: ""
	I1226 21:47:36.778023  703653 logs.go:284] 1 containers: [0b9784687fdf8249ef1817273d553c0e4539fdc99e2ed4e51e1e88341426888a]
	I1226 21:47:36.778115  703653 ssh_runner.go:195] Run: which crictl
	I1226 21:47:36.786332  703653 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1226 21:47:36.786481  703653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1226 21:47:36.865357  703653 cri.go:89] found id: "a1a3df534703e2bb88745f494a4f27e49f98cc68c5747f8b75f69df096105d43"
	I1226 21:47:36.865381  703653 cri.go:89] found id: ""
	I1226 21:47:36.865389  703653 logs.go:284] 1 containers: [a1a3df534703e2bb88745f494a4f27e49f98cc68c5747f8b75f69df096105d43]
	I1226 21:47:36.865446  703653 ssh_runner.go:195] Run: which crictl
	I1226 21:47:36.871193  703653 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1226 21:47:36.871273  703653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1226 21:47:36.939797  703653 cri.go:89] found id: "fc7c1d4cc434f0b477649382600f28a8b97ca79c8a08eb97ab1947e7523f584d"
	I1226 21:47:36.939824  703653 cri.go:89] found id: ""
	I1226 21:47:36.939833  703653 logs.go:284] 1 containers: [fc7c1d4cc434f0b477649382600f28a8b97ca79c8a08eb97ab1947e7523f584d]
	I1226 21:47:36.939889  703653 ssh_runner.go:195] Run: which crictl
	I1226 21:47:36.950417  703653 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1226 21:47:36.950495  703653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1226 21:47:36.971592  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:37.019700  703653 cri.go:89] found id: "5f4d17cd1a7591fad519abf838f38f74aa1d9ab2dbda0b46ce722b54400e80ee"
	I1226 21:47:37.019732  703653 cri.go:89] found id: ""
	I1226 21:47:37.019741  703653 logs.go:284] 1 containers: [5f4d17cd1a7591fad519abf838f38f74aa1d9ab2dbda0b46ce722b54400e80ee]
	I1226 21:47:37.019810  703653 ssh_runner.go:195] Run: which crictl
	I1226 21:47:37.044883  703653 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1226 21:47:37.044974  703653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1226 21:47:37.061353  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:37.108196  703653 cri.go:89] found id: "5f14597475deec6687f7cf3ad4f366e454ec133bf87236d0dd00a321bff92445"
	I1226 21:47:37.108272  703653 cri.go:89] found id: ""
	I1226 21:47:37.108293  703653 logs.go:284] 1 containers: [5f14597475deec6687f7cf3ad4f366e454ec133bf87236d0dd00a321bff92445]
	I1226 21:47:37.108380  703653 ssh_runner.go:195] Run: which crictl
	I1226 21:47:37.114078  703653 logs.go:123] Gathering logs for kube-apiserver [c5b1b0ac08cda61e9bbefc778ff2c9efec379f6aaf465452284a7a96531982d8] ...
	I1226 21:47:37.114151  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c5b1b0ac08cda61e9bbefc778ff2c9efec379f6aaf465452284a7a96531982d8"
	I1226 21:47:37.195220  703653 logs.go:123] Gathering logs for etcd [a00bf48419309758b7753801eeea631f219e76d965d7b323f5e9abc71c187c1a] ...
	I1226 21:47:37.195296  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a00bf48419309758b7753801eeea631f219e76d965d7b323f5e9abc71c187c1a"
	I1226 21:47:37.218174  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:37.275411  703653 logs.go:123] Gathering logs for kube-scheduler [a1a3df534703e2bb88745f494a4f27e49f98cc68c5747f8b75f69df096105d43] ...
	I1226 21:47:37.275488  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a1a3df534703e2bb88745f494a4f27e49f98cc68c5747f8b75f69df096105d43"
	I1226 21:47:37.362244  703653 logs.go:123] Gathering logs for kube-proxy [fc7c1d4cc434f0b477649382600f28a8b97ca79c8a08eb97ab1947e7523f584d] ...
	I1226 21:47:37.363123  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc7c1d4cc434f0b477649382600f28a8b97ca79c8a08eb97ab1947e7523f584d"
	I1226 21:47:37.424432  703653 logs.go:123] Gathering logs for container status ...
	I1226 21:47:37.424460  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1226 21:47:37.470454  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:37.495685  703653 logs.go:123] Gathering logs for kubelet ...
	I1226 21:47:37.495714  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1226 21:47:37.555728  703653 logs.go:138] Found kubelet problem: Dec 26 21:46:18 addons-154736 kubelet[1365]: W1226 21:46:18.444235    1365 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-154736" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-154736' and this object
	W1226 21:47:37.555951  703653 logs.go:138] Found kubelet problem: Dec 26 21:46:18 addons-154736 kubelet[1365]: E1226 21:46:18.444308    1365 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-154736" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-154736' and this object
	I1226 21:47:37.563330  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1226 21:47:37.577596  703653 logs.go:138] Found kubelet problem: Dec 26 21:46:52 addons-154736 kubelet[1365]: W1226 21:46:52.432342    1365 reflector.go:535] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-154736" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-154736' and this object
	W1226 21:47:37.577823  703653 logs.go:138] Found kubelet problem: Dec 26 21:46:52 addons-154736 kubelet[1365]: E1226 21:46:52.432378    1365 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-154736" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-154736' and this object
	W1226 21:47:37.578130  703653 logs.go:138] Found kubelet problem: Dec 26 21:46:52 addons-154736 kubelet[1365]: W1226 21:46:52.434011    1365 reflector.go:535] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-154736" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-154736' and this object
	W1226 21:47:37.578343  703653 logs.go:138] Found kubelet problem: Dec 26 21:46:52 addons-154736 kubelet[1365]: E1226 21:46:52.434040    1365 reflector.go:147] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-154736" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-154736' and this object
	W1226 21:47:37.579459  703653 logs.go:138] Found kubelet problem: Dec 26 21:46:52 addons-154736 kubelet[1365]: W1226 21:46:52.447831    1365 reflector.go:535] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-154736" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-154736' and this object
	W1226 21:47:37.579665  703653 logs.go:138] Found kubelet problem: Dec 26 21:46:52 addons-154736 kubelet[1365]: E1226 21:46:52.447865    1365 reflector.go:147] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-154736" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-154736' and this object
	W1226 21:47:37.581033  703653 logs.go:138] Found kubelet problem: Dec 26 21:46:52 addons-154736 kubelet[1365]: W1226 21:46:52.510344    1365 reflector.go:535] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-154736" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-154736' and this object
	W1226 21:47:37.581221  703653 logs.go:138] Found kubelet problem: Dec 26 21:46:52 addons-154736 kubelet[1365]: E1226 21:46:52.510378    1365 reflector.go:147] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-154736" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-154736' and this object
	I1226 21:47:37.618607  703653 logs.go:123] Gathering logs for describe nodes ...
	I1226 21:47:37.618646  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1226 21:47:37.707887  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:37.883827  703653 logs.go:123] Gathering logs for kube-controller-manager [5f4d17cd1a7591fad519abf838f38f74aa1d9ab2dbda0b46ce722b54400e80ee] ...
	I1226 21:47:37.883860  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f4d17cd1a7591fad519abf838f38f74aa1d9ab2dbda0b46ce722b54400e80ee"
	I1226 21:47:37.970815  703653 logs.go:123] Gathering logs for kindnet [5f14597475deec6687f7cf3ad4f366e454ec133bf87236d0dd00a321bff92445] ...
	I1226 21:47:37.970891  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f14597475deec6687f7cf3ad4f366e454ec133bf87236d0dd00a321bff92445"
	I1226 21:47:37.972427  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:38.083080  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:38.112927  703653 logs.go:123] Gathering logs for CRI-O ...
	I1226 21:47:38.112956  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1226 21:47:38.210766  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:38.227495  703653 logs.go:123] Gathering logs for dmesg ...
	I1226 21:47:38.227530  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1226 21:47:38.260459  703653 logs.go:123] Gathering logs for coredns [0b9784687fdf8249ef1817273d553c0e4539fdc99e2ed4e51e1e88341426888a] ...
	I1226 21:47:38.260489  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b9784687fdf8249ef1817273d553c0e4539fdc99e2ed4e51e1e88341426888a"
	I1226 21:47:38.314608  703653 out.go:309] Setting ErrFile to fd 2...
	I1226 21:47:38.314638  703653 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1226 21:47:38.314684  703653 out.go:239] X Problems detected in kubelet:
	W1226 21:47:38.314697  703653 out.go:239]   Dec 26 21:46:52 addons-154736 kubelet[1365]: E1226 21:46:52.434040    1365 reflector.go:147] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-154736" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-154736' and this object
	W1226 21:47:38.314705  703653 out.go:239]   Dec 26 21:46:52 addons-154736 kubelet[1365]: W1226 21:46:52.447831    1365 reflector.go:535] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-154736" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-154736' and this object
	W1226 21:47:38.314715  703653 out.go:239]   Dec 26 21:46:52 addons-154736 kubelet[1365]: E1226 21:46:52.447865    1365 reflector.go:147] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-154736" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-154736' and this object
	W1226 21:47:38.314726  703653 out.go:239]   Dec 26 21:46:52 addons-154736 kubelet[1365]: W1226 21:46:52.510344    1365 reflector.go:535] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-154736" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-154736' and this object
	W1226 21:47:38.314733  703653 out.go:239]   Dec 26 21:46:52 addons-154736 kubelet[1365]: E1226 21:46:52.510378    1365 reflector.go:147] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-154736" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-154736' and this object
	I1226 21:47:38.314742  703653 out.go:309] Setting ErrFile to fd 2...
	I1226 21:47:38.314748  703653 out.go:343] TERM=,COLORTERM=, which probably does not support color
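The "no relationship found between node 'addons-154736' and this object" warnings above come from the kube-apiserver's node authorizer, which only lets a kubelet read objects referenced by pods bound to that node; they are usually transient while addon pods are still being scheduled. One hedged way to probe what the node identity is allowed to do (an illustrative query, not a command taken from this report, and the node authorizer's per-object decisions may not be fully reflected in it):

	$ kubectl --context addons-154736 auth can-i list secrets \
	    --as=system:node:addons-154736 --as-group=system:nodes -n kube-system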
	I1226 21:47:38.469819  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:38.562425  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:38.705829  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:38.969304  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:39.072021  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:39.206688  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:39.470181  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:39.570020  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:39.707394  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:39.984670  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:40.065219  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:40.206859  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:40.469223  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:40.561358  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:40.706599  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:40.969985  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:41.061660  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:41.206767  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:41.470553  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:41.561113  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:41.705515  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:41.969382  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:42.061119  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:42.206342  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:42.470436  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:42.563875  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:42.706690  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:42.969264  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:43.065780  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:43.210316  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:43.470323  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:43.561501  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:43.706578  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:43.969896  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:44.062275  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:44.206718  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:44.472904  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:44.560083  703653 kapi.go:107] duration metric: took 1m20.004440781s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1226 21:47:44.705661  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:44.969639  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:45.209224  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:45.471716  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:45.707461  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:45.970535  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:46.206918  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:46.468572  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:46.710653  703653 kapi.go:107] duration metric: took 1m18.008457405s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1226 21:47:46.713128  703653 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-154736 cluster.
	I1226 21:47:46.715469  703653 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1226 21:47:46.717459  703653 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
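As an illustration of the opt-out hint above: the gcp-auth webhook skips pods that carry the `gcp-auth-skip-secret` label at creation time. The label key is from the log; the pod name and image below are hypothetical.

	$ kubectl --context addons-154736 run demo --image=docker.io/nginx:alpine \
	    --labels=gcp-auth-skip-secret=true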
	I1226 21:47:46.968908  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:47.469921  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:47.970474  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:48.315876  703653 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1226 21:47:48.333253  703653 api_server.go:72] duration metric: took 1m29.613843192s to wait for apiserver process to appear ...
	I1226 21:47:48.333327  703653 api_server.go:88] waiting for apiserver healthz status ...
	I1226 21:47:48.333374  703653 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1226 21:47:48.333520  703653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1226 21:47:48.387443  703653 cri.go:89] found id: "c5b1b0ac08cda61e9bbefc778ff2c9efec379f6aaf465452284a7a96531982d8"
	I1226 21:47:48.387505  703653 cri.go:89] found id: ""
	I1226 21:47:48.387528  703653 logs.go:284] 1 containers: [c5b1b0ac08cda61e9bbefc778ff2c9efec379f6aaf465452284a7a96531982d8]
	I1226 21:47:48.387614  703653 ssh_runner.go:195] Run: which crictl
	I1226 21:47:48.392971  703653 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1226 21:47:48.393076  703653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1226 21:47:48.441764  703653 cri.go:89] found id: "a00bf48419309758b7753801eeea631f219e76d965d7b323f5e9abc71c187c1a"
	I1226 21:47:48.441787  703653 cri.go:89] found id: ""
	I1226 21:47:48.441795  703653 logs.go:284] 1 containers: [a00bf48419309758b7753801eeea631f219e76d965d7b323f5e9abc71c187c1a]
	I1226 21:47:48.441857  703653 ssh_runner.go:195] Run: which crictl
	I1226 21:47:48.446560  703653 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1226 21:47:48.446637  703653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1226 21:47:48.469918  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:48.497664  703653 cri.go:89] found id: "0b9784687fdf8249ef1817273d553c0e4539fdc99e2ed4e51e1e88341426888a"
	I1226 21:47:48.497689  703653 cri.go:89] found id: ""
	I1226 21:47:48.497698  703653 logs.go:284] 1 containers: [0b9784687fdf8249ef1817273d553c0e4539fdc99e2ed4e51e1e88341426888a]
	I1226 21:47:48.497770  703653 ssh_runner.go:195] Run: which crictl
	I1226 21:47:48.503260  703653 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1226 21:47:48.503397  703653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1226 21:47:48.549508  703653 cri.go:89] found id: "a1a3df534703e2bb88745f494a4f27e49f98cc68c5747f8b75f69df096105d43"
	I1226 21:47:48.549586  703653 cri.go:89] found id: ""
	I1226 21:47:48.549618  703653 logs.go:284] 1 containers: [a1a3df534703e2bb88745f494a4f27e49f98cc68c5747f8b75f69df096105d43]
	I1226 21:47:48.549705  703653 ssh_runner.go:195] Run: which crictl
	I1226 21:47:48.561880  703653 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1226 21:47:48.562111  703653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1226 21:47:48.610492  703653 cri.go:89] found id: "fc7c1d4cc434f0b477649382600f28a8b97ca79c8a08eb97ab1947e7523f584d"
	I1226 21:47:48.610522  703653 cri.go:89] found id: ""
	I1226 21:47:48.610531  703653 logs.go:284] 1 containers: [fc7c1d4cc434f0b477649382600f28a8b97ca79c8a08eb97ab1947e7523f584d]
	I1226 21:47:48.610598  703653 ssh_runner.go:195] Run: which crictl
	I1226 21:47:48.615237  703653 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1226 21:47:48.615369  703653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1226 21:47:48.659894  703653 cri.go:89] found id: "5f4d17cd1a7591fad519abf838f38f74aa1d9ab2dbda0b46ce722b54400e80ee"
	I1226 21:47:48.659921  703653 cri.go:89] found id: ""
	I1226 21:47:48.659929  703653 logs.go:284] 1 containers: [5f4d17cd1a7591fad519abf838f38f74aa1d9ab2dbda0b46ce722b54400e80ee]
	I1226 21:47:48.659986  703653 ssh_runner.go:195] Run: which crictl
	I1226 21:47:48.664547  703653 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1226 21:47:48.664625  703653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1226 21:47:48.716253  703653 cri.go:89] found id: "5f14597475deec6687f7cf3ad4f366e454ec133bf87236d0dd00a321bff92445"
	I1226 21:47:48.716328  703653 cri.go:89] found id: ""
	I1226 21:47:48.716364  703653 logs.go:284] 1 containers: [5f14597475deec6687f7cf3ad4f366e454ec133bf87236d0dd00a321bff92445]
	I1226 21:47:48.716458  703653 ssh_runner.go:195] Run: which crictl
	I1226 21:47:48.721160  703653 logs.go:123] Gathering logs for kube-scheduler [a1a3df534703e2bb88745f494a4f27e49f98cc68c5747f8b75f69df096105d43] ...
	I1226 21:47:48.721186  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a1a3df534703e2bb88745f494a4f27e49f98cc68c5747f8b75f69df096105d43"
	I1226 21:47:48.796955  703653 logs.go:123] Gathering logs for kube-controller-manager [5f4d17cd1a7591fad519abf838f38f74aa1d9ab2dbda0b46ce722b54400e80ee] ...
	I1226 21:47:48.796993  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f4d17cd1a7591fad519abf838f38f74aa1d9ab2dbda0b46ce722b54400e80ee"
	I1226 21:47:48.939771  703653 logs.go:123] Gathering logs for kindnet [5f14597475deec6687f7cf3ad4f366e454ec133bf87236d0dd00a321bff92445] ...
	I1226 21:47:48.939808  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f14597475deec6687f7cf3ad4f366e454ec133bf87236d0dd00a321bff92445"
	I1226 21:47:48.973717  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:49.030173  703653 logs.go:123] Gathering logs for container status ...
	I1226 21:47:49.030203  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1226 21:47:49.125812  703653 logs.go:123] Gathering logs for dmesg ...
	I1226 21:47:49.125848  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1226 21:47:49.171784  703653 logs.go:123] Gathering logs for kube-apiserver [c5b1b0ac08cda61e9bbefc778ff2c9efec379f6aaf465452284a7a96531982d8] ...
	I1226 21:47:49.171827  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c5b1b0ac08cda61e9bbefc778ff2c9efec379f6aaf465452284a7a96531982d8"
	I1226 21:47:49.261006  703653 logs.go:123] Gathering logs for etcd [a00bf48419309758b7753801eeea631f219e76d965d7b323f5e9abc71c187c1a] ...
	I1226 21:47:49.261045  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a00bf48419309758b7753801eeea631f219e76d965d7b323f5e9abc71c187c1a"
	I1226 21:47:49.385558  703653 logs.go:123] Gathering logs for coredns [0b9784687fdf8249ef1817273d553c0e4539fdc99e2ed4e51e1e88341426888a] ...
	I1226 21:47:49.385596  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b9784687fdf8249ef1817273d553c0e4539fdc99e2ed4e51e1e88341426888a"
	I1226 21:47:49.470012  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:49.494852  703653 logs.go:123] Gathering logs for kubelet ...
	I1226 21:47:49.494954  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1226 21:47:49.548364  703653 logs.go:138] Found kubelet problem: Dec 26 21:46:52 addons-154736 kubelet[1365]: W1226 21:46:52.432342    1365 reflector.go:535] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-154736" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-154736' and this object
	W1226 21:47:49.549166  703653 logs.go:138] Found kubelet problem: Dec 26 21:46:52 addons-154736 kubelet[1365]: E1226 21:46:52.432378    1365 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-154736" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-154736' and this object
	W1226 21:47:49.549563  703653 logs.go:138] Found kubelet problem: Dec 26 21:46:52 addons-154736 kubelet[1365]: W1226 21:46:52.434011    1365 reflector.go:535] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-154736" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-154736' and this object
	W1226 21:47:49.549818  703653 logs.go:138] Found kubelet problem: Dec 26 21:46:52 addons-154736 kubelet[1365]: E1226 21:46:52.434040    1365 reflector.go:147] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-154736" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-154736' and this object
	W1226 21:47:49.551333  703653 logs.go:138] Found kubelet problem: Dec 26 21:46:52 addons-154736 kubelet[1365]: W1226 21:46:52.447831    1365 reflector.go:535] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-154736" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-154736' and this object
	W1226 21:47:49.551609  703653 logs.go:138] Found kubelet problem: Dec 26 21:46:52 addons-154736 kubelet[1365]: E1226 21:46:52.447865    1365 reflector.go:147] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-154736" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-154736' and this object
	W1226 21:47:49.553338  703653 logs.go:138] Found kubelet problem: Dec 26 21:46:52 addons-154736 kubelet[1365]: W1226 21:46:52.510344    1365 reflector.go:535] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-154736" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-154736' and this object
	W1226 21:47:49.553576  703653 logs.go:138] Found kubelet problem: Dec 26 21:46:52 addons-154736 kubelet[1365]: E1226 21:46:52.510378    1365 reflector.go:147] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-154736" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-154736' and this object
	I1226 21:47:49.605108  703653 logs.go:123] Gathering logs for describe nodes ...
	I1226 21:47:49.605192  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1226 21:47:49.830170  703653 logs.go:123] Gathering logs for kube-proxy [fc7c1d4cc434f0b477649382600f28a8b97ca79c8a08eb97ab1947e7523f584d] ...
	I1226 21:47:49.830254  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc7c1d4cc434f0b477649382600f28a8b97ca79c8a08eb97ab1947e7523f584d"
	I1226 21:47:49.887319  703653 logs.go:123] Gathering logs for CRI-O ...
	I1226 21:47:49.887347  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1226 21:47:49.972589  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:50.005922  703653 out.go:309] Setting ErrFile to fd 2...
	I1226 21:47:50.005963  703653 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1226 21:47:50.006045  703653 out.go:239] X Problems detected in kubelet:
	W1226 21:47:50.006055  703653 out.go:239]   Dec 26 21:46:52 addons-154736 kubelet[1365]: E1226 21:46:52.434040    1365 reflector.go:147] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-154736" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-154736' and this object
	W1226 21:47:50.006062  703653 out.go:239]   Dec 26 21:46:52 addons-154736 kubelet[1365]: W1226 21:46:52.447831    1365 reflector.go:535] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-154736" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-154736' and this object
	W1226 21:47:50.006073  703653 out.go:239]   Dec 26 21:46:52 addons-154736 kubelet[1365]: E1226 21:46:52.447865    1365 reflector.go:147] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-154736" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-154736' and this object
	W1226 21:47:50.006081  703653 out.go:239]   Dec 26 21:46:52 addons-154736 kubelet[1365]: W1226 21:46:52.510344    1365 reflector.go:535] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-154736" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-154736' and this object
	W1226 21:47:50.006087  703653 out.go:239]   Dec 26 21:46:52 addons-154736 kubelet[1365]: E1226 21:46:52.510378    1365 reflector.go:147] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-154736" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-154736' and this object
	I1226 21:47:50.006236  703653 out.go:309] Setting ErrFile to fd 2...
	I1226 21:47:50.006245  703653 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 21:47:50.469306  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:50.969161  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:51.469541  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:51.976072  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:52.468916  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:52.971505  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:53.471156  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:53.972400  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:54.470053  703653 kapi.go:107] duration metric: took 1m29.506960429s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1226 21:47:54.472743  703653 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner, cloud-spanner, default-storageclass, metrics-server, inspektor-gadget, ingress-dns, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1226 21:47:54.474676  703653 addons.go:508] enable addons completed in 1m36.704070199s: enabled=[nvidia-device-plugin storage-provisioner cloud-spanner default-storageclass metrics-server inspektor-gadget ingress-dns yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
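Following the earlier gcp-auth hint, a hedged example of pushing credentials into already-running workloads by re-enabling the addon with --refresh (the flag named in the log; -p selects this run's profile):

	$ minikube addons enable gcp-auth -p addons-154736 --refresh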
	I1226 21:48:00.012117  703653 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1226 21:48:00.054028  703653 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1226 21:48:00.105602  703653 api_server.go:141] control plane version: v1.28.4
	I1226 21:48:00.105631  703653 api_server.go:131] duration metric: took 11.77228382s to wait for apiserver health ...
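The healthz probe above can be reproduced by hand against the same endpoint; -k is needed because minikube's apiserver serves a self-signed certificate, and the expected body is the literal "ok" shown in the log:

	$ curl -k https://192.168.49.2:8443/healthz
	ok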
	I1226 21:48:00.105641  703653 system_pods.go:43] waiting for kube-system pods to appear ...
	I1226 21:48:00.105664  703653 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1226 21:48:00.105734  703653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1226 21:48:00.205318  703653 cri.go:89] found id: "c5b1b0ac08cda61e9bbefc778ff2c9efec379f6aaf465452284a7a96531982d8"
	I1226 21:48:00.205391  703653 cri.go:89] found id: ""
	I1226 21:48:00.205415  703653 logs.go:284] 1 containers: [c5b1b0ac08cda61e9bbefc778ff2c9efec379f6aaf465452284a7a96531982d8]
	I1226 21:48:00.205524  703653 ssh_runner.go:195] Run: which crictl
	I1226 21:48:00.215039  703653 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1226 21:48:00.215144  703653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1226 21:48:00.324722  703653 cri.go:89] found id: "a00bf48419309758b7753801eeea631f219e76d965d7b323f5e9abc71c187c1a"
	I1226 21:48:00.324761  703653 cri.go:89] found id: ""
	I1226 21:48:00.324771  703653 logs.go:284] 1 containers: [a00bf48419309758b7753801eeea631f219e76d965d7b323f5e9abc71c187c1a]
	I1226 21:48:00.324844  703653 ssh_runner.go:195] Run: which crictl
	I1226 21:48:00.334077  703653 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1226 21:48:00.334167  703653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1226 21:48:00.430622  703653 cri.go:89] found id: "0b9784687fdf8249ef1817273d553c0e4539fdc99e2ed4e51e1e88341426888a"
	I1226 21:48:00.430677  703653 cri.go:89] found id: ""
	I1226 21:48:00.430686  703653 logs.go:284] 1 containers: [0b9784687fdf8249ef1817273d553c0e4539fdc99e2ed4e51e1e88341426888a]
	I1226 21:48:00.430760  703653 ssh_runner.go:195] Run: which crictl
	I1226 21:48:00.436429  703653 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1226 21:48:00.436554  703653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1226 21:48:00.495777  703653 cri.go:89] found id: "a1a3df534703e2bb88745f494a4f27e49f98cc68c5747f8b75f69df096105d43"
	I1226 21:48:00.495803  703653 cri.go:89] found id: ""
	I1226 21:48:00.495812  703653 logs.go:284] 1 containers: [a1a3df534703e2bb88745f494a4f27e49f98cc68c5747f8b75f69df096105d43]
	I1226 21:48:00.495875  703653 ssh_runner.go:195] Run: which crictl
	I1226 21:48:00.501338  703653 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1226 21:48:00.501419  703653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1226 21:48:00.552860  703653 cri.go:89] found id: "fc7c1d4cc434f0b477649382600f28a8b97ca79c8a08eb97ab1947e7523f584d"
	I1226 21:48:00.552885  703653 cri.go:89] found id: ""
	I1226 21:48:00.552895  703653 logs.go:284] 1 containers: [fc7c1d4cc434f0b477649382600f28a8b97ca79c8a08eb97ab1947e7523f584d]
	I1226 21:48:00.552952  703653 ssh_runner.go:195] Run: which crictl
	I1226 21:48:00.558338  703653 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1226 21:48:00.558413  703653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1226 21:48:00.612356  703653 cri.go:89] found id: "5f4d17cd1a7591fad519abf838f38f74aa1d9ab2dbda0b46ce722b54400e80ee"
	I1226 21:48:00.612377  703653 cri.go:89] found id: ""
	I1226 21:48:00.612385  703653 logs.go:284] 1 containers: [5f4d17cd1a7591fad519abf838f38f74aa1d9ab2dbda0b46ce722b54400e80ee]
	I1226 21:48:00.612449  703653 ssh_runner.go:195] Run: which crictl
	I1226 21:48:00.619281  703653 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1226 21:48:00.619395  703653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1226 21:48:00.662127  703653 cri.go:89] found id: "5f14597475deec6687f7cf3ad4f366e454ec133bf87236d0dd00a321bff92445"
	I1226 21:48:00.662151  703653 cri.go:89] found id: ""
	I1226 21:48:00.662159  703653 logs.go:284] 1 containers: [5f14597475deec6687f7cf3ad4f366e454ec133bf87236d0dd00a321bff92445]
	I1226 21:48:00.662229  703653 ssh_runner.go:195] Run: which crictl
	I1226 21:48:00.667001  703653 logs.go:123] Gathering logs for kubelet ...
	I1226 21:48:00.667025  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1226 21:48:00.702904  703653 logs.go:138] Found kubelet problem: Dec 26 21:46:52 addons-154736 kubelet[1365]: W1226 21:46:52.432342    1365 reflector.go:535] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-154736" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-154736' and this object
	W1226 21:48:00.703173  703653 logs.go:138] Found kubelet problem: Dec 26 21:46:52 addons-154736 kubelet[1365]: E1226 21:46:52.432378    1365 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-154736" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-154736' and this object
	W1226 21:48:00.703487  703653 logs.go:138] Found kubelet problem: Dec 26 21:46:52 addons-154736 kubelet[1365]: W1226 21:46:52.434011    1365 reflector.go:535] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-154736" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-154736' and this object
	W1226 21:48:00.703673  703653 logs.go:138] Found kubelet problem: Dec 26 21:46:52 addons-154736 kubelet[1365]: E1226 21:46:52.434040    1365 reflector.go:147] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-154736" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-154736' and this object
	W1226 21:48:00.705704  703653 logs.go:138] Found kubelet problem: Dec 26 21:46:52 addons-154736 kubelet[1365]: W1226 21:46:52.447831    1365 reflector.go:535] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-154736" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-154736' and this object
	W1226 21:48:00.705960  703653 logs.go:138] Found kubelet problem: Dec 26 21:46:52 addons-154736 kubelet[1365]: E1226 21:46:52.447865    1365 reflector.go:147] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-154736" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-154736' and this object
	W1226 21:48:00.707327  703653 logs.go:138] Found kubelet problem: Dec 26 21:46:52 addons-154736 kubelet[1365]: W1226 21:46:52.510344    1365 reflector.go:535] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-154736" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-154736' and this object
	W1226 21:48:00.707535  703653 logs.go:138] Found kubelet problem: Dec 26 21:46:52 addons-154736 kubelet[1365]: E1226 21:46:52.510378    1365 reflector.go:147] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-154736" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-154736' and this object
	I1226 21:48:00.754783  703653 logs.go:123] Gathering logs for describe nodes ...
	I1226 21:48:00.754810  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1226 21:48:00.898722  703653 logs.go:123] Gathering logs for coredns [0b9784687fdf8249ef1817273d553c0e4539fdc99e2ed4e51e1e88341426888a] ...
	I1226 21:48:00.898807  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b9784687fdf8249ef1817273d553c0e4539fdc99e2ed4e51e1e88341426888a"
	I1226 21:48:00.944475  703653 logs.go:123] Gathering logs for kube-scheduler [a1a3df534703e2bb88745f494a4f27e49f98cc68c5747f8b75f69df096105d43] ...
	I1226 21:48:00.944507  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a1a3df534703e2bb88745f494a4f27e49f98cc68c5747f8b75f69df096105d43"
	I1226 21:48:00.998551  703653 logs.go:123] Gathering logs for kindnet [5f14597475deec6687f7cf3ad4f366e454ec133bf87236d0dd00a321bff92445] ...
	I1226 21:48:00.998580  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f14597475deec6687f7cf3ad4f366e454ec133bf87236d0dd00a321bff92445"
	I1226 21:48:01.044214  703653 logs.go:123] Gathering logs for CRI-O ...
	I1226 21:48:01.044242  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1226 21:48:01.140135  703653 logs.go:123] Gathering logs for container status ...
	I1226 21:48:01.140173  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1226 21:48:01.217748  703653 logs.go:123] Gathering logs for dmesg ...
	I1226 21:48:01.217781  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1226 21:48:01.240763  703653 logs.go:123] Gathering logs for kube-apiserver [c5b1b0ac08cda61e9bbefc778ff2c9efec379f6aaf465452284a7a96531982d8] ...
	I1226 21:48:01.240795  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c5b1b0ac08cda61e9bbefc778ff2c9efec379f6aaf465452284a7a96531982d8"
	I1226 21:48:01.318123  703653 logs.go:123] Gathering logs for etcd [a00bf48419309758b7753801eeea631f219e76d965d7b323f5e9abc71c187c1a] ...
	I1226 21:48:01.318163  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a00bf48419309758b7753801eeea631f219e76d965d7b323f5e9abc71c187c1a"
	I1226 21:48:01.403342  703653 logs.go:123] Gathering logs for kube-proxy [fc7c1d4cc434f0b477649382600f28a8b97ca79c8a08eb97ab1947e7523f584d] ...
	I1226 21:48:01.403377  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc7c1d4cc434f0b477649382600f28a8b97ca79c8a08eb97ab1947e7523f584d"
	I1226 21:48:01.445673  703653 logs.go:123] Gathering logs for kube-controller-manager [5f4d17cd1a7591fad519abf838f38f74aa1d9ab2dbda0b46ce722b54400e80ee] ...
	I1226 21:48:01.445704  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f4d17cd1a7591fad519abf838f38f74aa1d9ab2dbda0b46ce722b54400e80ee"
	I1226 21:48:01.544040  703653 out.go:309] Setting ErrFile to fd 2...
	I1226 21:48:01.544074  703653 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1226 21:48:01.544133  703653 out.go:239] X Problems detected in kubelet:
	W1226 21:48:01.544145  703653 out.go:239]   Dec 26 21:46:52 addons-154736 kubelet[1365]: E1226 21:46:52.434040    1365 reflector.go:147] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-154736" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-154736' and this object
	W1226 21:48:01.544153  703653 out.go:239]   Dec 26 21:46:52 addons-154736 kubelet[1365]: W1226 21:46:52.447831    1365 reflector.go:535] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-154736" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-154736' and this object
	W1226 21:48:01.544165  703653 out.go:239]   Dec 26 21:46:52 addons-154736 kubelet[1365]: E1226 21:46:52.447865    1365 reflector.go:147] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-154736" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-154736' and this object
	W1226 21:48:01.544171  703653 out.go:239]   Dec 26 21:46:52 addons-154736 kubelet[1365]: W1226 21:46:52.510344    1365 reflector.go:535] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-154736" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-154736' and this object
	W1226 21:48:01.544178  703653 out.go:239]   Dec 26 21:46:52 addons-154736 kubelet[1365]: E1226 21:46:52.510378    1365 reflector.go:147] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-154736" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-154736' and this object
	I1226 21:48:01.544187  703653 out.go:309] Setting ErrFile to fd 2...
	I1226 21:48:01.544193  703653 out.go:343] TERM=,COLORTERM=, which probably does not support color
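	
	The reflector errors flagged above ("no relationship found between node 'addons-154736' and this object") come from the apiserver's node authorizer racing pod-to-node binding during startup; every occurrence is stamped 21:46:52. A hedged way to confirm they did not recur later, reusing the same journalctl source the log gatherer reads (requires the cluster to still be running):
	
	  minikube -p addons-154736 ssh -- \
	    "sudo journalctl -u kubelet | grep -c 'no relationship found'"
	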
	I1226 21:48:11.554926  703653 system_pods.go:59] 18 kube-system pods found
	I1226 21:48:11.554969  703653 system_pods.go:61] "coredns-5dd5756b68-gbz9g" [7756995c-c766-475d-9528-a269947fb962] Running
	I1226 21:48:11.554976  703653 system_pods.go:61] "csi-hostpath-attacher-0" [27b8fdc0-b1a5-4537-90ed-94f695dc725c] Running
	I1226 21:48:11.554981  703653 system_pods.go:61] "csi-hostpath-resizer-0" [12956ad9-043f-423a-a709-e31bcd813e2c] Running
	I1226 21:48:11.554987  703653 system_pods.go:61] "csi-hostpathplugin-6v6w7" [16d70f46-43bf-4ddd-84fa-27b4cb888c4d] Running
	I1226 21:48:11.554993  703653 system_pods.go:61] "etcd-addons-154736" [547ecee7-8f0e-4964-9a05-a236594fe216] Running
	I1226 21:48:11.554998  703653 system_pods.go:61] "kindnet-5jgmg" [eca9c6b5-b0b8-4bdc-adf8-082992994bf6] Running
	I1226 21:48:11.555010  703653 system_pods.go:61] "kube-apiserver-addons-154736" [34c16ef5-ca23-4cb1-bec3-39f588dca777] Running
	I1226 21:48:11.555016  703653 system_pods.go:61] "kube-controller-manager-addons-154736" [b82dbbab-8430-449d-bdc0-1958eaf7e227] Running
	I1226 21:48:11.555028  703653 system_pods.go:61] "kube-ingress-dns-minikube" [e6c2ffcd-377b-48c7-b5ca-3ac9d28f4f89] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1226 21:48:11.555034  703653 system_pods.go:61] "kube-proxy-4r79z" [4d99dd25-dcc5-4774-9ed5-ad626aabfced] Running
	I1226 21:48:11.555050  703653 system_pods.go:61] "kube-scheduler-addons-154736" [6a9cd5cd-d4ac-42d2-a4c7-e14e0a947899] Running
	I1226 21:48:11.555058  703653 system_pods.go:61] "metrics-server-7c66d45ddc-pz8ht" [ff2fdb32-af66-480d-ad25-175b65c5b1d4] Running
	I1226 21:48:11.555066  703653 system_pods.go:61] "nvidia-device-plugin-daemonset-9xfxt" [74fad637-1854-48ce-b606-8a09c28e7cfe] Running
	I1226 21:48:11.555071  703653 system_pods.go:61] "registry-g2w98" [21fa161c-0f99-4fb5-9573-259bd78d21a5] Running
	I1226 21:48:11.555078  703653 system_pods.go:61] "registry-proxy-h7qrg" [274f34a4-99a0-4df2-8e40-73229ad88336] Running
	I1226 21:48:11.555083  703653 system_pods.go:61] "snapshot-controller-58dbcc7b99-rtlzb" [b1add7d4-2504-43e0-83c8-40fc2c220da7] Running
	I1226 21:48:11.555088  703653 system_pods.go:61] "snapshot-controller-58dbcc7b99-wl4bb" [a7f38ca6-3848-4c5b-a7a3-b01da5e90140] Running
	I1226 21:48:11.555092  703653 system_pods.go:61] "storage-provisioner" [f0bcfc9d-7cd8-489e-9d2f-49edc5ce7b5d] Running
	I1226 21:48:11.555099  703653 system_pods.go:74] duration metric: took 11.449451529s to wait for pod list to return data ...
	I1226 21:48:11.555111  703653 default_sa.go:34] waiting for default service account to be created ...
	I1226 21:48:11.557677  703653 default_sa.go:45] found service account: "default"
	I1226 21:48:11.557702  703653 default_sa.go:55] duration metric: took 2.583966ms for default service account to be created ...
	I1226 21:48:11.557712  703653 system_pods.go:116] waiting for k8s-apps to be running ...
	I1226 21:48:11.567787  703653 system_pods.go:86] 18 kube-system pods found
	I1226 21:48:11.567826  703653 system_pods.go:89] "coredns-5dd5756b68-gbz9g" [7756995c-c766-475d-9528-a269947fb962] Running
	I1226 21:48:11.567834  703653 system_pods.go:89] "csi-hostpath-attacher-0" [27b8fdc0-b1a5-4537-90ed-94f695dc725c] Running
	I1226 21:48:11.567840  703653 system_pods.go:89] "csi-hostpath-resizer-0" [12956ad9-043f-423a-a709-e31bcd813e2c] Running
	I1226 21:48:11.567846  703653 system_pods.go:89] "csi-hostpathplugin-6v6w7" [16d70f46-43bf-4ddd-84fa-27b4cb888c4d] Running
	I1226 21:48:11.567851  703653 system_pods.go:89] "etcd-addons-154736" [547ecee7-8f0e-4964-9a05-a236594fe216] Running
	I1226 21:48:11.567856  703653 system_pods.go:89] "kindnet-5jgmg" [eca9c6b5-b0b8-4bdc-adf8-082992994bf6] Running
	I1226 21:48:11.567860  703653 system_pods.go:89] "kube-apiserver-addons-154736" [34c16ef5-ca23-4cb1-bec3-39f588dca777] Running
	I1226 21:48:11.567867  703653 system_pods.go:89] "kube-controller-manager-addons-154736" [b82dbbab-8430-449d-bdc0-1958eaf7e227] Running
	I1226 21:48:11.567876  703653 system_pods.go:89] "kube-ingress-dns-minikube" [e6c2ffcd-377b-48c7-b5ca-3ac9d28f4f89] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1226 21:48:11.567890  703653 system_pods.go:89] "kube-proxy-4r79z" [4d99dd25-dcc5-4774-9ed5-ad626aabfced] Running
	I1226 21:48:11.567899  703653 system_pods.go:89] "kube-scheduler-addons-154736" [6a9cd5cd-d4ac-42d2-a4c7-e14e0a947899] Running
	I1226 21:48:11.567904  703653 system_pods.go:89] "metrics-server-7c66d45ddc-pz8ht" [ff2fdb32-af66-480d-ad25-175b65c5b1d4] Running
	I1226 21:48:11.567910  703653 system_pods.go:89] "nvidia-device-plugin-daemonset-9xfxt" [74fad637-1854-48ce-b606-8a09c28e7cfe] Running
	I1226 21:48:11.567917  703653 system_pods.go:89] "registry-g2w98" [21fa161c-0f99-4fb5-9573-259bd78d21a5] Running
	I1226 21:48:11.567922  703653 system_pods.go:89] "registry-proxy-h7qrg" [274f34a4-99a0-4df2-8e40-73229ad88336] Running
	I1226 21:48:11.567926  703653 system_pods.go:89] "snapshot-controller-58dbcc7b99-rtlzb" [b1add7d4-2504-43e0-83c8-40fc2c220da7] Running
	I1226 21:48:11.567931  703653 system_pods.go:89] "snapshot-controller-58dbcc7b99-wl4bb" [a7f38ca6-3848-4c5b-a7a3-b01da5e90140] Running
	I1226 21:48:11.567938  703653 system_pods.go:89] "storage-provisioner" [f0bcfc9d-7cd8-489e-9d2f-49edc5ce7b5d] Running
	I1226 21:48:11.567948  703653 system_pods.go:126] duration metric: took 10.227696ms to wait for k8s-apps to be running ...
	I1226 21:48:11.567960  703653 system_svc.go:44] waiting for kubelet service to be running ....
	I1226 21:48:11.568025  703653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1226 21:48:11.582548  703653 system_svc.go:56] duration metric: took 14.580186ms WaitForService to wait for kubelet.
	I1226 21:48:11.582578  703653 kubeadm.go:581] duration metric: took 1m52.863172985s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1226 21:48:11.582598  703653 node_conditions.go:102] verifying NodePressure condition ...
	I1226 21:48:11.586209  703653 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1226 21:48:11.586240  703653 node_conditions.go:123] node cpu capacity is 2
	I1226 21:48:11.586252  703653 node_conditions.go:105] duration metric: took 3.649229ms to run NodePressure ...
	I1226 21:48:11.586264  703653 start.go:228] waiting for startup goroutines ...
	I1226 21:48:11.586271  703653 start.go:233] waiting for cluster config update ...
	I1226 21:48:11.586284  703653 start.go:242] writing updated cluster config ...
	I1226 21:48:11.586574  703653 ssh_runner.go:195] Run: rm -f paused
	I1226 21:48:11.914611  703653 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I1226 21:48:11.917788  703653 out.go:177] * Done! kubectl is now configured to use "addons-154736" cluster and "default" namespace by default
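	
	The version line above reports a client/server minor skew of 1 (kubectl 1.29.0 against cluster 1.28.4), which is within kubectl's supported skew window; to re-check both versions against this profile:
	
	  kubectl --context addons-154736 version
	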
	
	
	==> CRI-O <==
	Dec 26 21:55:18 addons-154736 crio[897]: time="2023-12-26 21:55:18.486405855Z" level=info msg="Image docker.io/nginx:alpine not found" id=f7bdba2b-2092-4536-8063-cd1d2f8579f0 name=/runtime.v1.ImageService/ImageStatus
	Dec 26 21:55:18 addons-154736 crio[897]: time="2023-12-26 21:55:18.487300521Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=2cc72c56-db56-4a69-b1ab-1dd3fe583efc name=/runtime.v1.ImageService/PullImage
	Dec 26 21:55:18 addons-154736 crio[897]: time="2023-12-26 21:55:18.489340885Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Dec 26 21:55:29 addons-154736 crio[897]: time="2023-12-26 21:55:29.486675710Z" level=info msg="Checking image status: docker.io/nginx:latest" id=df47c617-8688-42fc-a9d4-cc6550adccd3 name=/runtime.v1.ImageService/ImageStatus
	Dec 26 21:55:29 addons-154736 crio[897]: time="2023-12-26 21:55:29.486900345Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8aea65d81da202cf886d7766c7f2691bb9e363c6b5d9b1f5d9ddaaa4bc1e90c2,RepoTags:[docker.io/library/nginx:latest],RepoDigests:[docker.io/library/nginx@sha256:2bdc49f2f8ae8d8dc50ed00f2ee56d00385c6f8bc8a8b320d0a294d9e3b49026 docker.io/library/nginx@sha256:73a06b3a2577448f9acc23502a0cb4d41919da9cc5035e66b0a9a09715397684],Size_:196113558,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=df47c617-8688-42fc-a9d4-cc6550adccd3 name=/runtime.v1.ImageService/ImageStatus
	Dec 26 21:55:44 addons-154736 crio[897]: time="2023-12-26 21:55:44.487371609Z" level=info msg="Checking image status: docker.io/nginx:latest" id=167ca2a6-35c9-41d9-b172-18ba27fde065 name=/runtime.v1.ImageService/ImageStatus
	Dec 26 21:55:44 addons-154736 crio[897]: time="2023-12-26 21:55:44.487600215Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8aea65d81da202cf886d7766c7f2691bb9e363c6b5d9b1f5d9ddaaa4bc1e90c2,RepoTags:[docker.io/library/nginx:latest],RepoDigests:[docker.io/library/nginx@sha256:2bdc49f2f8ae8d8dc50ed00f2ee56d00385c6f8bc8a8b320d0a294d9e3b49026 docker.io/library/nginx@sha256:73a06b3a2577448f9acc23502a0cb4d41919da9cc5035e66b0a9a09715397684],Size_:196113558,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=167ca2a6-35c9-41d9-b172-18ba27fde065 name=/runtime.v1.ImageService/ImageStatus
	Dec 26 21:55:58 addons-154736 crio[897]: time="2023-12-26 21:55:58.487183464Z" level=info msg="Checking image status: docker.io/nginx:latest" id=2218b318-eda8-483a-bc1b-1596be1e7c1e name=/runtime.v1.ImageService/ImageStatus
	Dec 26 21:55:58 addons-154736 crio[897]: time="2023-12-26 21:55:58.487407951Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8aea65d81da202cf886d7766c7f2691bb9e363c6b5d9b1f5d9ddaaa4bc1e90c2,RepoTags:[docker.io/library/nginx:latest],RepoDigests:[docker.io/library/nginx@sha256:2bdc49f2f8ae8d8dc50ed00f2ee56d00385c6f8bc8a8b320d0a294d9e3b49026 docker.io/library/nginx@sha256:73a06b3a2577448f9acc23502a0cb4d41919da9cc5035e66b0a9a09715397684],Size_:196113558,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=2218b318-eda8-483a-bc1b-1596be1e7c1e name=/runtime.v1.ImageService/ImageStatus
	Dec 26 21:55:58 addons-154736 crio[897]: time="2023-12-26 21:55:58.488926357Z" level=info msg="Pulling image: docker.io/nginx:latest" id=3378fbcf-6532-44db-aabc-e82fb5299276 name=/runtime.v1.ImageService/PullImage
	Dec 26 21:55:58 addons-154736 crio[897]: time="2023-12-26 21:55:58.491290233Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Dec 26 21:56:00 addons-154736 crio[897]: time="2023-12-26 21:56:00.489222771Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=c949f247-f9c2-4660-8847-07a9e2a62a50 name=/runtime.v1.ImageService/ImageStatus
	Dec 26 21:56:00 addons-154736 crio[897]: time="2023-12-26 21:56:00.489597408Z" level=info msg="Image docker.io/nginx:alpine not found" id=c949f247-f9c2-4660-8847-07a9e2a62a50 name=/runtime.v1.ImageService/ImageStatus
	Dec 26 21:56:04 addons-154736 crio[897]: time="2023-12-26 21:56:04.528336492Z" level=info msg="Checking image status: registry.k8s.io/pause:3.9" id=fd6d83ee-186e-4060-818c-2c3082007e75 name=/runtime.v1.ImageService/ImageStatus
	Dec 26 21:56:04 addons-154736 crio[897]: time="2023-12-26 21:56:04.528700004Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,RepoTags:[registry.k8s.io/pause:3.9],RepoDigests:[registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6 registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097],Size_:520014,Uid:&Int64Value{Value:65535,},Username:,Spec:nil,},Info:map[string]string{},}" id=fd6d83ee-186e-4060-818c-2c3082007e75 name=/runtime.v1.ImageService/ImageStatus
	Dec 26 21:56:14 addons-154736 crio[897]: time="2023-12-26 21:56:14.486738386Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=6db9d0f3-8531-4796-9550-d72e60434936 name=/runtime.v1.ImageService/ImageStatus
	Dec 26 21:56:14 addons-154736 crio[897]: time="2023-12-26 21:56:14.486999967Z" level=info msg="Image docker.io/nginx:alpine not found" id=6db9d0f3-8531-4796-9550-d72e60434936 name=/runtime.v1.ImageService/ImageStatus
	Dec 26 21:56:29 addons-154736 crio[897]: time="2023-12-26 21:56:29.486908065Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=89bc51ff-54a5-467e-a9bb-1a5fa980ac45 name=/runtime.v1.ImageService/ImageStatus
	Dec 26 21:56:29 addons-154736 crio[897]: time="2023-12-26 21:56:29.487165306Z" level=info msg="Image docker.io/nginx:alpine not found" id=89bc51ff-54a5-467e-a9bb-1a5fa980ac45 name=/runtime.v1.ImageService/ImageStatus
	Dec 26 21:56:42 addons-154736 crio[897]: time="2023-12-26 21:56:42.486635188Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=86d3ea47-e5d0-43ef-9da0-949ba58fbdd7 name=/runtime.v1.ImageService/ImageStatus
	Dec 26 21:56:42 addons-154736 crio[897]: time="2023-12-26 21:56:42.486935357Z" level=info msg="Image docker.io/nginx:alpine not found" id=86d3ea47-e5d0-43ef-9da0-949ba58fbdd7 name=/runtime.v1.ImageService/ImageStatus
	Dec 26 21:56:56 addons-154736 crio[897]: time="2023-12-26 21:56:56.487317175Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=e72d72e5-97dc-460b-9e1f-02f3d96f755f name=/runtime.v1.ImageService/ImageStatus
	Dec 26 21:56:56 addons-154736 crio[897]: time="2023-12-26 21:56:56.487539168Z" level=info msg="Image docker.io/nginx:alpine not found" id=e72d72e5-97dc-460b-9e1f-02f3d96f755f name=/runtime.v1.ImageService/ImageStatus
	Dec 26 21:56:57 addons-154736 crio[897]: time="2023-12-26 21:56:57.486980106Z" level=info msg="Checking image status: docker.io/nginx:latest" id=c9cddc1d-2945-409d-87f2-c961ff98295a name=/runtime.v1.ImageService/ImageStatus
	Dec 26 21:56:57 addons-154736 crio[897]: time="2023-12-26 21:56:57.487212248Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8aea65d81da202cf886d7766c7f2691bb9e363c6b5d9b1f5d9ddaaa4bc1e90c2,RepoTags:[docker.io/library/nginx:latest],RepoDigests:[docker.io/library/nginx@sha256:2bdc49f2f8ae8d8dc50ed00f2ee56d00385c6f8bc8a8b320d0a294d9e3b49026 docker.io/library/nginx@sha256:73a06b3a2577448f9acc23502a0cb4d41919da9cc5035e66b0a9a09715397684],Size_:196113558,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=c9cddc1d-2945-409d-87f2-c961ff98295a name=/runtime.v1.ImageService/ImageStatus
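	
	The CRI-O excerpt shows docker.io/nginx:alpine being re-checked and re-pulled for minutes without the image ever landing, which matches the nginx pod's ImagePullBackOff. A hedged way to retry the pull by hand from inside the node and surface the underlying registry error (standard minikube/crictl commands, not part of this run):
	
	  minikube -p addons-154736 ssh
	  sudo crictl pull docker.io/nginx:alpine
	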
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	5fe6705eae1fb       1499ed4fbd0aa6ea742ab6bce25603aa33556e1ac0e2f24a4901a675247e538a                                                                             3 minutes ago       Exited              minikube-ingress-dns                     6                   01202b99ec6d0       kube-ingress-dns-minikube
	abb80e1df162f       ghcr.io/headlamp-k8s/headlamp@sha256:0fe50c48c186b89ff3d341dba427174d8232a64c3062af5de854a3a7cb2105ce                                        8 minutes ago       Running             headlamp                                 0                   7ca1bd6e6c1a2       headlamp-7ddfbb94ff-qntlc
	d649a08406e0b       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          9 minutes ago       Running             csi-snapshotter                          0                   9824742ac2b4d       csi-hostpathplugin-6v6w7
	a2e9c531dfae6       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          9 minutes ago       Running             csi-provisioner                          0                   9824742ac2b4d       csi-hostpathplugin-6v6w7
	c46e1aaf747f5       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            9 minutes ago       Running             liveness-probe                           0                   9824742ac2b4d       csi-hostpathplugin-6v6w7
	5736264be5277       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           9 minutes ago       Running             hostpath                                 0                   9824742ac2b4d       csi-hostpathplugin-6v6w7
	426fb5db606fa       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:63b520448091bc94aa4dba00d6b3b3c25e410c4fb73aa46feae5b25f9895abaa                                 9 minutes ago       Running             gcp-auth                                 0                   9bd4cf657b01b       gcp-auth-d4c87556c-c9kbx
	165279e6203ce       registry.k8s.io/ingress-nginx/controller@sha256:1ca66aa9f7f8fdecbecc88e4b89f0f4e7a1f1e952d0d5e52df2524e526259f6b                             9 minutes ago       Running             controller                               0                   4a0fbbb610f9c       ingress-nginx-controller-69cff4fd79-rqdlh
	c9f32bcae8b00       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                9 minutes ago       Running             node-driver-registrar                    0                   9824742ac2b4d       csi-hostpathplugin-6v6w7
	b684e7784c70e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:67202a0258c6f81d073f265f449a732c89cc1112a8e80ea27317294df6dce2b5                   9 minutes ago       Exited              patch                                    0                   4ea53d7234716       ingress-nginx-admission-patch-gwrdr
	af22d1e4dbcc7       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:67202a0258c6f81d073f265f449a732c89cc1112a8e80ea27317294df6dce2b5                   9 minutes ago       Exited              create                                   0                   d92ec2169085e       ingress-nginx-admission-create-jtzt2
	1bc6c433ebb20       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   9 minutes ago       Running             csi-external-health-monitor-controller   0                   9824742ac2b4d       csi-hostpathplugin-6v6w7
	9ebfe5e4c3c95       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             9 minutes ago       Running             csi-attacher                             0                   6403bae626ef6       csi-hostpath-attacher-0
	de059b46043d6       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              9 minutes ago       Running             csi-resizer                              0                   95e515ab13810       csi-hostpath-resizer-0
	7d0d63bb665e8       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             9 minutes ago       Running             local-path-provisioner                   0                   3c9b34a725c2d       local-path-provisioner-78b46b4d5c-nwr5l
	ce1e295decd3a       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      9 minutes ago       Running             volume-snapshot-controller               0                   01b7c76bc0451       snapshot-controller-58dbcc7b99-wl4bb
	8716dbfa389d1       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      10 minutes ago      Running             volume-snapshot-controller               0                   d160ae65c8530       snapshot-controller-58dbcc7b99-rtlzb
	aea4da7d2eeb6       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                                              10 minutes ago      Running             yakd                                     0                   4f2b49cd1d1cc       yakd-dashboard-9947fc6bf-5ggjq
	2ca195417d20c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             10 minutes ago      Running             storage-provisioner                      0                   c3b7a1e4b36d7       storage-provisioner
	0b9784687fdf8       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                                                             10 minutes ago      Running             coredns                                  0                   eb0ca42f98aa6       coredns-5dd5756b68-gbz9g
	fc7c1d4cc434f       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39                                                                             10 minutes ago      Running             kube-proxy                               0                   ecfe1e6ef509b       kube-proxy-4r79z
	5f14597475dee       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                                                             10 minutes ago      Running             kindnet-cni                              0                   9ab992efa85d9       kindnet-5jgmg
	c5b1b0ac08cda       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419                                                                             11 minutes ago      Running             kube-apiserver                           0                   931df03571461       kube-apiserver-addons-154736
	5f4d17cd1a759       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b                                                                             11 minutes ago      Running             kube-controller-manager                  0                   2627c20b7662e       kube-controller-manager-addons-154736
	a1a3df534703e       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54                                                                             11 minutes ago      Running             kube-scheduler                           0                   ec5babc1f5d4e       kube-scheduler-addons-154736
	a00bf48419309       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                                                             11 minutes ago      Running             etcd                                     0                   7d1113e33d739       etcd-addons-154736
	
	
	==> coredns [0b9784687fdf8249ef1817273d553c0e4539fdc99e2ed4e51e1e88341426888a] <==
	[INFO] 10.244.0.17:56311 - 15111 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002393906s
	[INFO] 10.244.0.17:51530 - 56241 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000175454s
	[INFO] 10.244.0.17:51530 - 12478 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000167824s
	[INFO] 10.244.0.17:39191 - 1431 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000152177s
	[INFO] 10.244.0.17:39191 - 26810 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000257677s
	[INFO] 10.244.0.17:48634 - 5977 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000094922s
	[INFO] 10.244.0.17:48634 - 31128 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000201201s
	[INFO] 10.244.0.17:42129 - 3593 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000112933s
	[INFO] 10.244.0.17:42129 - 27143 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000175676s
	[INFO] 10.244.0.17:38550 - 52237 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001772128s
	[INFO] 10.244.0.17:38550 - 50184 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001680774s
	[INFO] 10.244.0.17:43293 - 31812 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000078431s
	[INFO] 10.244.0.17:43293 - 9529 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000118701s
	[INFO] 10.244.0.20:60042 - 54775 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000264445s
	[INFO] 10.244.0.20:36194 - 51775 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00017314s
	[INFO] 10.244.0.20:34311 - 42203 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000161137s
	[INFO] 10.244.0.20:47094 - 4881 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00011441s
	[INFO] 10.244.0.20:40589 - 5895 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000281774s
	[INFO] 10.244.0.20:42712 - 53585 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000328559s
	[INFO] 10.244.0.20:45825 - 51530 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002334691s
	[INFO] 10.244.0.20:60259 - 40603 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002397369s
	[INFO] 10.244.0.20:47110 - 43627 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000740267s
	[INFO] 10.244.0.20:50917 - 58925 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 344 0.004757339s
	[INFO] 10.244.0.22:47955 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000184398s
	[INFO] 10.244.0.22:42660 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000128588s
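	
	The NXDOMAIN lines above are expected: with the default ndots:5 resolv.conf, each short name is first expanded through the search domains before the fully-qualified query succeeds (NOERROR). A throwaway pod can confirm in-cluster resolution still works (pod and image names here are illustrative):
	
	  kubectl --context addons-154736 run dns-check --rm -it \
	    --image=busybox:1.36 --restart=Never -- \
	    nslookup registry.kube-system.svc.cluster.local
	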
	
	
	==> describe nodes <==
	Name:               addons-154736
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-154736
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393f165ced08f66e4386491f243850f87982a22b
	                    minikube.k8s.io/name=addons-154736
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_26T21_46_05_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-154736
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-154736"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 26 Dec 2023 21:46:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-154736
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 26 Dec 2023 21:56:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 26 Dec 2023 21:54:13 +0000   Tue, 26 Dec 2023 21:45:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 26 Dec 2023 21:54:13 +0000   Tue, 26 Dec 2023 21:45:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 26 Dec 2023 21:54:13 +0000   Tue, 26 Dec 2023 21:45:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 26 Dec 2023 21:54:13 +0000   Tue, 26 Dec 2023 21:46:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-154736
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 cfc7b39add434585b250c10345c20f17
	  System UUID:                04713493-cfac-4455-8894-dae1076e6bc4
	  Boot ID:                    f8f887b2-8c20-433d-a967-90e814370f09
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m2s
	  default                     task-pv-pod-restore                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m42s
	  gcp-auth                    gcp-auth-d4c87556c-c9kbx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  headlamp                    headlamp-7ddfbb94ff-qntlc                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m48s
	  ingress-nginx               ingress-nginx-controller-69cff4fd79-rqdlh    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         10m
	  kube-system                 coredns-5dd5756b68-gbz9g                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     10m
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 csi-hostpathplugin-6v6w7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-addons-154736                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         10m
	  kube-system                 kindnet-5jgmg                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-addons-154736                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-addons-154736        200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-4r79z                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-addons-154736                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 snapshot-controller-58dbcc7b99-rtlzb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 snapshot-controller-58dbcc7b99-wl4bb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  local-path-storage          local-path-provisioner-78b46b4d5c-nwr5l      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  yakd-dashboard              yakd-dashboard-9947fc6bf-5ggjq               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)   100m (5%)
	  memory             438Mi (5%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
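	
	The cpu figure is 950m requested out of 2000m allocatable (2 cores), i.e. 47.5%, which kubectl truncates to 47%. To re-derive the table directly from the live node:
	
	  kubectl --context addons-154736 describe node addons-154736 \
	    | grep -A 8 'Allocated resources'
	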
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 10m   kube-proxy       
	  Normal  Starting                 10m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m   kubelet          Node addons-154736 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m   kubelet          Node addons-154736 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m   kubelet          Node addons-154736 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m   node-controller  Node addons-154736 event: Registered Node addons-154736 in Controller
	  Normal  NodeReady                10m   kubelet          Node addons-154736 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.001114] FS-Cache: O-key=[8] '635f3b0000000000'
	[  +0.000763] FS-Cache: N-cookie c=00000030 [p=00000027 fl=2 nc=0 na=1]
	[  +0.001031] FS-Cache: N-cookie d=00000000b9607d6a{9p.inode} n=000000000db3c1b7
	[  +0.001157] FS-Cache: N-key=[8] '635f3b0000000000'
	[  +0.002874] FS-Cache: Duplicate cookie detected
	[  +0.000764] FS-Cache: O-cookie c=0000002a [p=00000027 fl=226 nc=0 na=1]
	[  +0.001117] FS-Cache: O-cookie d=00000000b9607d6a{9p.inode} n=000000007ac7c815
	[  +0.001084] FS-Cache: O-key=[8] '635f3b0000000000'
	[  +0.000742] FS-Cache: N-cookie c=00000031 [p=00000027 fl=2 nc=0 na=1]
	[  +0.001038] FS-Cache: N-cookie d=00000000b9607d6a{9p.inode} n=00000000328509c1
	[  +0.001125] FS-Cache: N-key=[8] '635f3b0000000000'
	[  +2.220713] FS-Cache: Duplicate cookie detected
	[  +0.000749] FS-Cache: O-cookie c=00000028 [p=00000027 fl=226 nc=0 na=1]
	[  +0.001122] FS-Cache: O-cookie d=00000000b9607d6a{9p.inode} n=00000000ebeba0e0
	[  +0.001200] FS-Cache: O-key=[8] '615f3b0000000000'
	[  +0.000765] FS-Cache: N-cookie c=00000033 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000989] FS-Cache: N-cookie d=00000000b9607d6a{9p.inode} n=000000008353ea7f
	[  +0.001072] FS-Cache: N-key=[8] '615f3b0000000000'
	[  +0.309997] FS-Cache: Duplicate cookie detected
	[  +0.000749] FS-Cache: O-cookie c=0000002d [p=00000027 fl=226 nc=0 na=1]
	[  +0.001114] FS-Cache: O-cookie d=00000000b9607d6a{9p.inode} n=00000000e02b88cc
	[  +0.001198] FS-Cache: O-key=[8] '695f3b0000000000'
	[  +0.000739] FS-Cache: N-cookie c=00000034 [p=00000027 fl=2 nc=0 na=1]
	[  +0.001020] FS-Cache: N-cookie d=00000000b9607d6a{9p.inode} n=000000000db3c1b7
	[  +0.001131] FS-Cache: N-key=[8] '695f3b0000000000'
	
	
	==> etcd [a00bf48419309758b7753801eeea631f219e76d965d7b323f5e9abc71c187c1a] <==
	{"level":"info","ts":"2023-12-26T21:46:19.449152Z","caller":"traceutil/trace.go:171","msg":"trace[612606180] range","detail":"{range_begin:/registry/serviceaccounts/kube-public/; range_end:/registry/serviceaccounts/kube-public0; response_count:1; response_revision:340; }","duration":"167.415706ms","start":"2023-12-26T21:46:19.281716Z","end":"2023-12-26T21:46:19.449131Z","steps":["trace[612606180] 'agreement among raft nodes before linearized reading'  (duration: 153.1419ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-26T21:46:19.435203Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"153.620929ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kube-proxy\" ","response":"range_response_count:1 size:185"}
	{"level":"info","ts":"2023-12-26T21:46:19.450364Z","caller":"traceutil/trace.go:171","msg":"trace[1514450604] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kube-proxy; range_end:; response_count:1; response_revision:340; }","duration":"168.7533ms","start":"2023-12-26T21:46:19.281574Z","end":"2023-12-26T21:46:19.450327Z","steps":["trace[1514450604] 'agreement among raft nodes before linearized reading'  (duration: 153.599177ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-26T21:46:19.435236Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"153.87264ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2023-12-26T21:46:19.450684Z","caller":"traceutil/trace.go:171","msg":"trace[844122797] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replicaset-controller; range_end:; response_count:1; response_revision:340; }","duration":"169.314879ms","start":"2023-12-26T21:46:19.28136Z","end":"2023-12-26T21:46:19.450675Z","steps":["trace[844122797] 'agreement among raft nodes before linearized reading'  (duration: 153.860891ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-26T21:46:22.668146Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"232.19351ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-154736\" ","response":"range_response_count:1 size:5743"}
	{"level":"info","ts":"2023-12-26T21:46:22.673597Z","caller":"traceutil/trace.go:171","msg":"trace[511291353] range","detail":"{range_begin:/registry/minions/addons-154736; range_end:; response_count:1; response_revision:378; }","duration":"237.638785ms","start":"2023-12-26T21:46:22.435901Z","end":"2023-12-26T21:46:22.673539Z","steps":["trace[511291353] 'agreement among raft nodes before linearized reading'  (duration: 36.731458ms)","trace[511291353] 'range keys from in-memory index tree'  (duration: 195.457558ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-26T21:46:22.673853Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"201.04778ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128026081072486522 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" mod_revision:363 > success:<request_put:<key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" value_size:3174 >> failure:<request_range:<key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-12-26T21:46:22.673947Z","caller":"traceutil/trace.go:171","msg":"trace[1935977781] linearizableReadLoop","detail":"{readStateIndex:388; appliedIndex:387; }","duration":"201.322071ms","start":"2023-12-26T21:46:22.472609Z","end":"2023-12-26T21:46:22.673932Z","steps":["trace[1935977781] 'read index received'  (duration: 419.797µs)","trace[1935977781] 'applied index is now lower than readState.Index'  (duration: 200.901019ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-26T21:46:22.674917Z","caller":"traceutil/trace.go:171","msg":"trace[231094946] transaction","detail":"{read_only:false; response_revision:379; number_of_response:1; }","duration":"238.601307ms","start":"2023-12-26T21:46:22.436303Z","end":"2023-12-26T21:46:22.674904Z","steps":["trace[231094946] 'process raft request'  (duration: 36.42707ms)","trace[231094946] 'compare'  (duration: 194.925385ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-26T21:46:22.679418Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"243.401262ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas/local-path-storage/\" range_end:\"/registry/resourcequotas/local-path-storage0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-26T21:46:22.685778Z","caller":"traceutil/trace.go:171","msg":"trace[1635112425] range","detail":"{range_begin:/registry/resourcequotas/local-path-storage/; range_end:/registry/resourcequotas/local-path-storage0; response_count:0; response_revision:379; }","duration":"249.765309ms","start":"2023-12-26T21:46:22.435992Z","end":"2023-12-26T21:46:22.685757Z","steps":["trace[1635112425] 'agreement among raft nodes before linearized reading'  (duration: 238.070382ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-26T21:46:22.687437Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"251.139407ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges/default/\" range_end:\"/registry/limitranges/default0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-26T21:46:22.692198Z","caller":"traceutil/trace.go:171","msg":"trace[1516003958] range","detail":"{range_begin:/registry/limitranges/default/; range_end:/registry/limitranges/default0; response_count:0; response_revision:379; }","duration":"255.897058ms","start":"2023-12-26T21:46:22.43628Z","end":"2023-12-26T21:46:22.692177Z","steps":["trace[1516003958] 'agreement among raft nodes before linearized reading'  (duration: 237.777523ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-26T21:46:22.693456Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"257.419763ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas/yakd-dashboard/\" range_end:\"/registry/resourcequotas/yakd-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-26T21:46:22.737829Z","caller":"traceutil/trace.go:171","msg":"trace[1066999797] range","detail":"{range_begin:/registry/resourcequotas/yakd-dashboard/; range_end:/registry/resourcequotas/yakd-dashboard0; response_count:0; response_revision:382; }","duration":"301.7929ms","start":"2023-12-26T21:46:22.436018Z","end":"2023-12-26T21:46:22.737811Z","steps":["trace[1066999797] 'agreement among raft nodes before linearized reading'  (duration: 238.041861ms)","trace[1066999797] 'get authentication metadata'  (duration: 19.369681ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-26T21:46:22.737927Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-26T21:46:22.436015Z","time spent":"301.894427ms","remote":"127.0.0.1:36046","response type":"/etcdserverpb.KV/Range","request count":0,"request size":84,"response count":0,"response size":29,"request content":"key:\"/registry/resourcequotas/yakd-dashboard/\" range_end:\"/registry/resourcequotas/yakd-dashboard0\" "}
	{"level":"warn","ts":"2023-12-26T21:46:22.69355Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"257.58278ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2023-12-26T21:46:22.738143Z","caller":"traceutil/trace.go:171","msg":"trace[1020394058] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:382; }","duration":"302.170483ms","start":"2023-12-26T21:46:22.435963Z","end":"2023-12-26T21:46:22.738133Z","steps":["trace[1020394058] 'agreement among raft nodes before linearized reading'  (duration: 238.109658ms)","trace[1020394058] 'get authentication metadata'  (duration: 19.456522ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-26T21:46:22.738223Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-26T21:46:22.435959Z","time spent":"302.253221ms","remote":"127.0.0.1:36074","response type":"/etcdserverpb.KV/Range","request count":0,"request size":34,"response count":1,"response size":375,"request content":"key:\"/registry/namespaces/kube-system\" "}
	{"level":"info","ts":"2023-12-26T21:46:22.687855Z","caller":"traceutil/trace.go:171","msg":"trace[1234581384] transaction","detail":"{read_only:false; response_revision:380; number_of_response:1; }","duration":"164.055398ms","start":"2023-12-26T21:46:22.523789Z","end":"2023-12-26T21:46:22.687845Z","steps":["trace[1234581384] 'process raft request'  (duration: 162.503228ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-26T21:46:22.736988Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-26T21:46:22.436272Z","time spent":"300.69471ms","remote":"127.0.0.1:36104","response type":"/etcdserverpb.KV/Range","request count":0,"request size":64,"response count":0,"response size":29,"request content":"key:\"/registry/limitranges/default/\" range_end:\"/registry/limitranges/default0\" "}
	{"level":"info","ts":"2023-12-26T21:55:58.409445Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1780}
	{"level":"info","ts":"2023-12-26T21:55:58.439808Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1780,"took":"29.861837ms","hash":107786079}
	{"level":"info","ts":"2023-12-26T21:55:58.439863Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":107786079,"revision":1780,"compact-revision":-1}
	
	
	==> gcp-auth [426fb5db606fa75ce6945ac23ec91db80974534187f20b743958f83105dfbf01] <==
	2023/12/26 21:47:46 GCP Auth Webhook started!
	2023/12/26 21:48:13 Ready to marshal response ...
	2023/12/26 21:48:13 Ready to write response ...
	2023/12/26 21:48:13 Ready to marshal response ...
	2023/12/26 21:48:13 Ready to write response ...
	2023/12/26 21:48:13 Ready to marshal response ...
	2023/12/26 21:48:13 Ready to write response ...
	2023/12/26 21:48:24 Ready to marshal response ...
	2023/12/26 21:48:24 Ready to write response ...
	2023/12/26 21:48:30 Ready to marshal response ...
	2023/12/26 21:48:30 Ready to write response ...
	2023/12/26 21:48:30 Ready to marshal response ...
	2023/12/26 21:48:30 Ready to write response ...
	2023/12/26 21:48:39 Ready to marshal response ...
	2023/12/26 21:48:39 Ready to write response ...
	2023/12/26 21:48:44 Ready to marshal response ...
	2023/12/26 21:48:44 Ready to write response ...
	2023/12/26 21:48:59 Ready to marshal response ...
	2023/12/26 21:48:59 Ready to write response ...
	2023/12/26 21:49:19 Ready to marshal response ...
	2023/12/26 21:49:19 Ready to write response ...
	
	
	==> kernel <==
	 21:57:01 up  5:39,  0 users,  load average: 0.20, 0.55, 1.21
	Linux addons-154736 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [5f14597475deec6687f7cf3ad4f366e454ec133bf87236d0dd00a321bff92445] <==
	I1226 21:54:52.339188       1 main.go:227] handling current node
	I1226 21:55:02.348797       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 21:55:02.348825       1 main.go:227] handling current node
	I1226 21:55:12.357795       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 21:55:12.357824       1 main.go:227] handling current node
	I1226 21:55:22.366465       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 21:55:22.366844       1 main.go:227] handling current node
	I1226 21:55:32.378028       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 21:55:32.378056       1 main.go:227] handling current node
	I1226 21:55:42.382491       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 21:55:42.382523       1 main.go:227] handling current node
	I1226 21:55:52.395433       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 21:55:52.395472       1 main.go:227] handling current node
	I1226 21:56:02.399915       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 21:56:02.399945       1 main.go:227] handling current node
	I1226 21:56:12.412124       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 21:56:12.412153       1 main.go:227] handling current node
	I1226 21:56:22.416830       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 21:56:22.416857       1 main.go:227] handling current node
	I1226 21:56:32.428967       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 21:56:32.428997       1 main.go:227] handling current node
	I1226 21:56:42.434151       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 21:56:42.434189       1 main.go:227] handling current node
	I1226 21:56:52.445696       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 21:56:52.445731       1 main.go:227] handling current node
	
	
	==> kube-apiserver [c5b1b0ac08cda61e9bbefc778ff2c9efec379f6aaf465452284a7a96531982d8] <==
	E1226 21:47:18.183194       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E1226 21:47:18.183278       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.226.217:443/apis/metrics.k8s.io/v1beta1: Get "https://10.101.226.217:443/apis/metrics.k8s.io/v1beta1": context deadline exceeded
	I1226 21:47:18.226035       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1226 21:47:18.235616       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	I1226 21:47:18.249146       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1226 21:48:01.090044       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1226 21:48:13.391328       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.100.241.164"}
	I1226 21:48:47.250109       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I1226 21:48:47.272806       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1226 21:48:48.323602       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1226 21:48:57.771535       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1226 21:48:58.917884       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1226 21:48:59.228975       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.161.33"}
	I1226 21:49:19.227715       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1226 21:51:01.378469       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1226 21:51:01.378621       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1226 21:51:01.378765       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1226 21:51:01.378829       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1226 21:51:01.378947       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1226 21:51:01.379002       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1226 21:56:01.379493       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1226 21:56:01.379564       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1226 21:56:01.380091       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1226 21:56:01.380146       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	
	
	==> kube-controller-manager [5f4d17cd1a7591fad519abf838f38f74aa1d9ab2dbda0b46ce722b54400e80ee] <==
	I1226 21:49:19.442102       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	W1226 21:49:22.322988       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1226 21:49:22.323018       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1226 21:50:08.035478       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1226 21:50:08.035517       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1226 21:50:43.390905       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1226 21:50:43.390938       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1226 21:51:20.739347       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1226 21:51:20.739382       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1226 21:51:55.552410       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1226 21:51:55.552446       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1226 21:52:31.808159       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1226 21:52:31.808199       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1226 21:53:02.112640       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1226 21:53:02.112676       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1226 21:53:35.339875       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1226 21:53:35.339909       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1226 21:54:32.830341       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1226 21:54:32.830378       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1226 21:55:17.216364       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1226 21:55:17.216397       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1226 21:55:55.408594       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1226 21:55:55.408634       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1226 21:56:42.221032       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1226 21:56:42.221164       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [fc7c1d4cc434f0b477649382600f28a8b97ca79c8a08eb97ab1947e7523f584d] <==
	I1226 21:46:23.640742       1 server_others.go:69] "Using iptables proxy"
	I1226 21:46:23.780851       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1226 21:46:23.961749       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1226 21:46:23.964285       1 server_others.go:152] "Using iptables Proxier"
	I1226 21:46:23.964388       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1226 21:46:23.964453       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1226 21:46:23.964548       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1226 21:46:23.964814       1 server.go:846] "Version info" version="v1.28.4"
	I1226 21:46:23.965035       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1226 21:46:23.965823       1 config.go:188] "Starting service config controller"
	I1226 21:46:23.966279       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1226 21:46:23.966348       1 config.go:97] "Starting endpoint slice config controller"
	I1226 21:46:23.966380       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1226 21:46:23.966960       1 config.go:315] "Starting node config controller"
	I1226 21:46:23.967020       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1226 21:46:24.068617       1 shared_informer.go:318] Caches are synced for node config
	I1226 21:46:24.068805       1 shared_informer.go:318] Caches are synced for service config
	I1226 21:46:24.068877       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [a1a3df534703e2bb88745f494a4f27e49f98cc68c5747f8b75f69df096105d43] <==
	W1226 21:46:01.499821       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1226 21:46:01.499861       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1226 21:46:01.499951       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1226 21:46:01.499994       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1226 21:46:01.500095       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1226 21:46:01.500137       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1226 21:46:01.500229       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1226 21:46:01.500269       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1226 21:46:01.500416       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1226 21:46:01.500460       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1226 21:46:01.500581       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1226 21:46:01.500641       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1226 21:46:01.500744       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1226 21:46:01.500787       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1226 21:46:01.500885       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1226 21:46:01.500922       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1226 21:46:01.501014       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1226 21:46:01.501056       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1226 21:46:01.501147       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1226 21:46:01.501185       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1226 21:46:01.501259       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1226 21:46:01.501297       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1226 21:46:02.462665       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1226 21:46:02.462706       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1226 21:46:05.170353       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 26 21:56:04 addons-154736 kubelet[1365]: E1226 21:56:04.702559    1365 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /docker/0927c77a91cb3abbb70268b3a742f5c0c803d1cbb42dd1efdc24f53bd33e9c94, memory: /docker/0927c77a91cb3abbb70268b3a742f5c0c803d1cbb42dd1efdc24f53bd33e9c94/system.slice/kubelet.service"
	Dec 26 21:56:04 addons-154736 kubelet[1365]: E1226 21:56:04.712000    1365 manager.go:1106] Failed to create existing container: /crio-e0e51a4b6edc7ce00a271701cfc3682cb172087a8a07a3eee24537d16438244d: Error finding container e0e51a4b6edc7ce00a271701cfc3682cb172087a8a07a3eee24537d16438244d: Status 404 returned error can't find the container with id e0e51a4b6edc7ce00a271701cfc3682cb172087a8a07a3eee24537d16438244d
	Dec 26 21:56:04 addons-154736 kubelet[1365]: E1226 21:56:04.712307    1365 manager.go:1106] Failed to create existing container: /docker/0927c77a91cb3abbb70268b3a742f5c0c803d1cbb42dd1efdc24f53bd33e9c94/crio-e0e51a4b6edc7ce00a271701cfc3682cb172087a8a07a3eee24537d16438244d: Error finding container e0e51a4b6edc7ce00a271701cfc3682cb172087a8a07a3eee24537d16438244d: Status 404 returned error can't find the container with id e0e51a4b6edc7ce00a271701cfc3682cb172087a8a07a3eee24537d16438244d
	Dec 26 21:56:04 addons-154736 kubelet[1365]: E1226 21:56:04.712573    1365 manager.go:1106] Failed to create existing container: /docker/0927c77a91cb3abbb70268b3a742f5c0c803d1cbb42dd1efdc24f53bd33e9c94/crio-d929e20c5935e81c71048d8651d982f22977ffc67f098b4b6530407ffeaa3f0f: Error finding container d929e20c5935e81c71048d8651d982f22977ffc67f098b4b6530407ffeaa3f0f: Status 404 returned error can't find the container with id d929e20c5935e81c71048d8651d982f22977ffc67f098b4b6530407ffeaa3f0f
	Dec 26 21:56:04 addons-154736 kubelet[1365]: E1226 21:56:04.712877    1365 manager.go:1106] Failed to create existing container: /crio-9de0ffc80a888993dda0cf8d7dfbc8404cd150a9b2fabac624d21d9e640d3e38: Error finding container 9de0ffc80a888993dda0cf8d7dfbc8404cd150a9b2fabac624d21d9e640d3e38: Status 404 returned error can't find the container with id 9de0ffc80a888993dda0cf8d7dfbc8404cd150a9b2fabac624d21d9e640d3e38
	Dec 26 21:56:04 addons-154736 kubelet[1365]: E1226 21:56:04.713105    1365 manager.go:1106] Failed to create existing container: /crio-d929e20c5935e81c71048d8651d982f22977ffc67f098b4b6530407ffeaa3f0f: Error finding container d929e20c5935e81c71048d8651d982f22977ffc67f098b4b6530407ffeaa3f0f: Status 404 returned error can't find the container with id d929e20c5935e81c71048d8651d982f22977ffc67f098b4b6530407ffeaa3f0f
	Dec 26 21:56:04 addons-154736 kubelet[1365]: E1226 21:56:04.713322    1365 manager.go:1106] Failed to create existing container: /crio-88bbfca1fd18479c52e5a80c7f33f86b611a658500dfd5677be53da7cb2a5271: Error finding container 88bbfca1fd18479c52e5a80c7f33f86b611a658500dfd5677be53da7cb2a5271: Status 404 returned error can't find the container with id 88bbfca1fd18479c52e5a80c7f33f86b611a658500dfd5677be53da7cb2a5271
	Dec 26 21:56:04 addons-154736 kubelet[1365]: E1226 21:56:04.731977    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/dcc17bb60317014cee8eee7b8a545d5404ff37d371eefccd11eb56d576a69c26/diff" to get inode usage: stat /var/lib/containers/storage/overlay/dcc17bb60317014cee8eee7b8a545d5404ff37d371eefccd11eb56d576a69c26/diff: no such file or directory, extraDiskErr: <nil>
	Dec 26 21:56:12 addons-154736 kubelet[1365]: I1226 21:56:12.486473    1365 scope.go:117] "RemoveContainer" containerID="5fe6705eae1fbc89ec665967bdfcef33c4d35b3192b0d5e54fd47a92656d5772"
	Dec 26 21:56:12 addons-154736 kubelet[1365]: E1226 21:56:12.486738    1365 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(e6c2ffcd-377b-48c7-b5ca-3ac9d28f4f89)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="e6c2ffcd-377b-48c7-b5ca-3ac9d28f4f89"
	Dec 26 21:56:14 addons-154736 kubelet[1365]: E1226 21:56:14.487577    1365 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx" podUID="89cc57b4-3f60-4a69-b7d3-dbc25226b9c0"
	Dec 26 21:56:27 addons-154736 kubelet[1365]: I1226 21:56:27.485893    1365 scope.go:117] "RemoveContainer" containerID="5fe6705eae1fbc89ec665967bdfcef33c4d35b3192b0d5e54fd47a92656d5772"
	Dec 26 21:56:27 addons-154736 kubelet[1365]: E1226 21:56:27.486177    1365 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(e6c2ffcd-377b-48c7-b5ca-3ac9d28f4f89)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="e6c2ffcd-377b-48c7-b5ca-3ac9d28f4f89"
	Dec 26 21:56:29 addons-154736 kubelet[1365]: E1226 21:56:29.487406    1365 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx" podUID="89cc57b4-3f60-4a69-b7d3-dbc25226b9c0"
	Dec 26 21:56:41 addons-154736 kubelet[1365]: I1226 21:56:41.486682    1365 scope.go:117] "RemoveContainer" containerID="5fe6705eae1fbc89ec665967bdfcef33c4d35b3192b0d5e54fd47a92656d5772"
	Dec 26 21:56:41 addons-154736 kubelet[1365]: E1226 21:56:41.486972    1365 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(e6c2ffcd-377b-48c7-b5ca-3ac9d28f4f89)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="e6c2ffcd-377b-48c7-b5ca-3ac9d28f4f89"
	Dec 26 21:56:42 addons-154736 kubelet[1365]: E1226 21:56:42.487946    1365 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx" podUID="89cc57b4-3f60-4a69-b7d3-dbc25226b9c0"
	Dec 26 21:56:42 addons-154736 kubelet[1365]: E1226 21:56:42.951400    1365 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:73a06b3a2577448f9acc23502a0cb4d41919da9cc5035e66b0a9a09715397684 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Dec 26 21:56:42 addons-154736 kubelet[1365]: E1226 21:56:42.951459    1365 kuberuntime_image.go:53] "Failed to pull image" err="loading manifest for target platform: reading manifest sha256:73a06b3a2577448f9acc23502a0cb4d41919da9cc5035e66b0a9a09715397684 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Dec 26 21:56:42 addons-154736 kubelet[1365]: E1226 21:56:42.951565    1365 kuberuntime_manager.go:1261] container &Container{Name:task-pv-container,Image:docker.io/nginx,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-server,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOOGLE_APPLICATION_CREDENTIALS,Value:/google-app-creds.json,ValueFrom:nil,},EnvVar{Name:PROJECT_ID,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCP_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GOOGLE_CLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:CLOUDSDK_CORE_PROJECT,Value:this_is_fake,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:task-pv-storage,ReadOnly:false,MountPath:/usr/share/nginx/html,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-6gjg5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:gcp-creds,ReadOnly:true,MountPath:/google-app-creds.json,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod task-pv-pod-restore_default(207a63c3-f7e0-4270-aa76-3681a7e5658c): ErrImagePull: loading manifest for target platform: reading manifest sha256:73a06b3a2577448f9acc23502a0cb4d41919da9cc5035e66b0a9a09715397684 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Dec 26 21:56:42 addons-154736 kubelet[1365]: E1226 21:56:42.951607    1365 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"loading manifest for target platform: reading manifest sha256:73a06b3a2577448f9acc23502a0cb4d41919da9cc5035e66b0a9a09715397684 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod-restore" podUID="207a63c3-f7e0-4270-aa76-3681a7e5658c"
	Dec 26 21:56:56 addons-154736 kubelet[1365]: I1226 21:56:56.486545    1365 scope.go:117] "RemoveContainer" containerID="5fe6705eae1fbc89ec665967bdfcef33c4d35b3192b0d5e54fd47a92656d5772"
	Dec 26 21:56:56 addons-154736 kubelet[1365]: E1226 21:56:56.486780    1365 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(e6c2ffcd-377b-48c7-b5ca-3ac9d28f4f89)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="e6c2ffcd-377b-48c7-b5ca-3ac9d28f4f89"
	Dec 26 21:56:56 addons-154736 kubelet[1365]: E1226 21:56:56.487956    1365 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx" podUID="89cc57b4-3f60-4a69-b7d3-dbc25226b9c0"
	Dec 26 21:56:57 addons-154736 kubelet[1365]: E1226 21:56:57.487461    1365 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/task-pv-pod-restore" podUID="207a63c3-f7e0-4270-aa76-3681a7e5658c"
	
	
	==> storage-provisioner [2ca195417d20cd7d770bd0d4ca4ba2c4f87c603396ace0f89dc95113a10a3c0f] <==
	I1226 21:46:53.461435       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1226 21:46:53.482276       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1226 21:46:53.482442       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1226 21:46:53.499286       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1226 21:46:53.499560       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-154736_8b77c78b-51e5-498d-bad2-c4833c8e2aec!
	I1226 21:46:53.501777       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1f84c5a7-4067-41c5-b8c4-76d4700df79e", APIVersion:"v1", ResourceVersion:"884", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-154736_8b77c78b-51e5-498d-bad2-c4833c8e2aec became leader
	I1226 21:46:53.600700       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-154736_8b77c78b-51e5-498d-bad2-c4833c8e2aec!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-154736 -n addons-154736
helpers_test.go:261: (dbg) Run:  kubectl --context addons-154736 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: nginx task-pv-pod-restore ingress-nginx-admission-create-jtzt2 ingress-nginx-admission-patch-gwrdr
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-154736 describe pod nginx task-pv-pod-restore ingress-nginx-admission-create-jtzt2 ingress-nginx-admission-patch-gwrdr
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-154736 describe pod nginx task-pv-pod-restore ingress-nginx-admission-create-jtzt2 ingress-nginx-admission-patch-gwrdr: exit status 1 (117.874829ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-154736/192.168.49.2
	Start Time:       Tue, 26 Dec 2023 21:48:59 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mttn4 (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  kube-api-access-mttn4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  8m3s                   default-scheduler  Successfully assigned default/nginx to addons-154736
	  Warning  Failed     7m32s                  kubelet            Failed to pull image "docker.io/nginx:alpine": initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     4m30s                  kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:7913e8fa2e6a5f0160a5e6b7ea48b7d4a301c6058d63c3d632a35a59093cb4eb in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m40s (x4 over 8m3s)   kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     3m10s (x4 over 7m32s)  kubelet            Error: ErrImagePull
	  Warning  Failed     3m10s (x2 over 6m2s)   kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   BackOff    2m45s (x7 over 7m32s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     2m45s (x7 over 7m32s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             task-pv-pod-restore
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-154736/192.168.49.2
	Start Time:       Tue, 26 Dec 2023 21:49:19 +0000
	Labels:           app=task-pv-pod-restore
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6gjg5 (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc-restore
	    ReadOnly:   false
	  kube-api-access-6gjg5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  7m43s                  default-scheduler  Successfully assigned default/task-pv-pod-restore to addons-154736
	  Warning  Failed     4m (x2 over 5m31s)     kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m19s (x4 over 7m42s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     2m25s (x2 over 6m32s)  kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:73a06b3a2577448f9acc23502a0cb4d41919da9cc5035e66b0a9a09715397684 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m25s (x4 over 6m32s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    105s (x7 over 6m32s)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     105s (x7 over 6m32s)   kubelet            Error: ImagePullBackOff

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-jtzt2" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-gwrdr" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-154736 describe pod nginx task-pv-pod-restore ingress-nginx-admission-create-jtzt2 ingress-nginx-admission-patch-gwrdr: exit status 1
--- FAIL: TestAddons/parallel/Ingress (484.42s)
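The pod events above show the proximate cause: every pull of docker.io/nginx:alpine fails with Docker Hub's toomanyrequests rate limit, so the pod never leaves ImagePullBackOff and the 8m0s wait times out. A minimal workaround sketch, assuming a Docker daemon on the test host and the running addons-154736 profile (the subcommands below are real docker/minikube commands, but this exact invocation is illustrative and not part of the test suite):

  # Pull once on the host (authenticated via `docker login`, if possible)
  docker pull docker.io/nginx:alpine
  # Side-load the cached image into the minikube node's container runtime,
  # so kubelet never has to contact Docker Hub for it
  minikube -p addons-154736 image load docker.io/nginx:alpine

Alternatively, starting the cluster with a registry mirror (minikube start --registry-mirror=https://mirror.gcr.io) routes anonymous pulls away from Docker Hub entirely.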

TestAddons/parallel/CSI (403.23s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 13.876768ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-154736 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-154736 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-154736 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-154736 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-154736 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-154736 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-154736 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [5242d5bf-535d-44a6-a04e-c85d958aa248] Pending
helpers_test.go:344: "task-pv-pod" [5242d5bf-535d-44a6-a04e-c85d958aa248] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [5242d5bf-535d-44a6-a04e-c85d958aa248] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.003333568s
addons_test.go:584: (dbg) Run:  kubectl --context addons-154736 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-154736 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-154736 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-154736 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-154736 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-154736 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-154736 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-154736 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-154736 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-154736 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-154736 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-154736 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-154736 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-154736 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-154736 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-154736 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-154736 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-154736 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-154736 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-154736 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-154736 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-154736 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-154736 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-154736 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-154736 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-154736 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-154736 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [207a63c3-f7e0-4270-aa76-3681a7e5658c] Pending
helpers_test.go:344: "task-pv-pod-restore" [207a63c3-f7e0-4270-aa76-3681a7e5658c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:329: TestAddons/parallel/CSI: WARNING: pod list for "default" "app=task-pv-pod-restore" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:621: ***** TestAddons/parallel/CSI: pod "app=task-pv-pod-restore" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:621: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-154736 -n addons-154736
addons_test.go:621: TestAddons/parallel/CSI: showing logs for failed pods as of 2023-12-26 21:55:20.127985879 +0000 UTC m=+640.780960734
addons_test.go:621: (dbg) Run:  kubectl --context addons-154736 describe po task-pv-pod-restore -n default
addons_test.go:621: (dbg) kubectl --context addons-154736 describe po task-pv-pod-restore -n default:
Name:             task-pv-pod-restore
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-154736/192.168.49.2
Start Time:       Tue, 26 Dec 2023 21:49:19 +0000
Labels:           app=task-pv-pod-restore
Annotations:      <none>
Status:           Pending
IP:               10.244.0.28
IPs:
IP:  10.244.0.28
Containers:
task-pv-container:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:
GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
PROJECT_ID:                      this_is_fake
GCP_PROJECT:                     this_is_fake
GCLOUD_PROJECT:                  this_is_fake
GOOGLE_CLOUD_PROJECT:            this_is_fake
CLOUDSDK_CORE_PROJECT:           this_is_fake
Mounts:
/google-app-creds.json from gcp-creds (ro)
/usr/share/nginx/html from task-pv-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6gjg5 (ro)
Conditions:
Type              Status
Initialized       True 
Ready             False 
ContainersReady   False 
PodScheduled      True 
Volumes:
task-pv-storage:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  hpvc-restore
ReadOnly:   false
kube-api-access-6gjg5:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
gcp-creds:
Type:          HostPath (bare host directory volume)
Path:          /var/lib/minikube/google_application_credentials.json
HostPathType:  File
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  6m1s                   default-scheduler  Successfully assigned default/task-pv-pod-restore to addons-154736
Warning  Failed     2m18s (x2 over 3m49s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Normal   Pulling    97s (x4 over 6m)       kubelet            Pulling image "docker.io/nginx"
Warning  Failed     43s (x2 over 4m50s)    kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:73a06b3a2577448f9acc23502a0cb4d41919da9cc5035e66b0a9a09715397684 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Warning  Failed     43s (x4 over 4m50s)    kubelet            Error: ErrImagePull
Normal   BackOff    3s (x7 over 4m50s)     kubelet            Back-off pulling image "docker.io/nginx"
Warning  Failed     3s (x7 over 4m50s)     kubelet            Error: ImagePullBackOff
addons_test.go:621: (dbg) Run:  kubectl --context addons-154736 logs task-pv-pod-restore -n default
addons_test.go:621: (dbg) Non-zero exit: kubectl --context addons-154736 logs task-pv-pod-restore -n default: exit status 1 (115.810582ms)

** stderr ** 
	Error from server (BadRequest): container "task-pv-container" in pod "task-pv-pod-restore" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:621: kubectl --context addons-154736 logs task-pv-pod-restore -n default: exit status 1
addons_test.go:622: failed waiting for pod task-pv-pod-restore: app=task-pv-pod-restore within 6m0s: context deadline exceeded
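This CSI failure shares the Ingress test's root cause: task-pv-pod-restore never starts because pulls of docker.io/nginx hit the same anonymous pull rate limit. A diagnostic sketch for checking how much quota remains, based on Docker Hub's documented token endpoint and ratelimit headers (jq is assumed to be installed; a HEAD request does not itself consume quota):

  TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
  curl --head -s -H "Authorization: Bearer $TOKEN" \
    "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest" | grep -i ratelimit
  # Expect headers such as: ratelimit-limit: 100;w=21600  and  ratelimit-remaining: 0;w=21600

A ratelimit-remaining of 0, as the window value (w=21600, i.e. 6 hours) suggests, would explain every ErrImagePull in this run.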
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/CSI]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-154736
helpers_test.go:235: (dbg) docker inspect addons-154736:

-- stdout --
	[
	    {
	        "Id": "0927c77a91cb3abbb70268b3a742f5c0c803d1cbb42dd1efdc24f53bd33e9c94",
	        "Created": "2023-12-26T21:45:41.806387804Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 704120,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-26T21:45:42.123091502Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3bfff26a1ae256fcdf8f10a333efdefbe26edc5c1669e1cc5c973c016e44d3c4",
	        "ResolvConfPath": "/var/lib/docker/containers/0927c77a91cb3abbb70268b3a742f5c0c803d1cbb42dd1efdc24f53bd33e9c94/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0927c77a91cb3abbb70268b3a742f5c0c803d1cbb42dd1efdc24f53bd33e9c94/hostname",
	        "HostsPath": "/var/lib/docker/containers/0927c77a91cb3abbb70268b3a742f5c0c803d1cbb42dd1efdc24f53bd33e9c94/hosts",
	        "LogPath": "/var/lib/docker/containers/0927c77a91cb3abbb70268b3a742f5c0c803d1cbb42dd1efdc24f53bd33e9c94/0927c77a91cb3abbb70268b3a742f5c0c803d1cbb42dd1efdc24f53bd33e9c94-json.log",
	        "Name": "/addons-154736",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-154736:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-154736",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c0eaaf8543365e6297970bf5096d74b7af77ea75fc0bb6e681d7f593d9e01e51-init/diff:/var/lib/docker/overlay2/45396a29879cab7c8a67d68e40c59b67c1c0ba964e9ed87a152af8cc5862c477/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c0eaaf8543365e6297970bf5096d74b7af77ea75fc0bb6e681d7f593d9e01e51/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c0eaaf8543365e6297970bf5096d74b7af77ea75fc0bb6e681d7f593d9e01e51/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c0eaaf8543365e6297970bf5096d74b7af77ea75fc0bb6e681d7f593d9e01e51/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-154736",
	                "Source": "/var/lib/docker/volumes/addons-154736/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-154736",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-154736",
	                "name.minikube.sigs.k8s.io": "addons-154736",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2c290e95bcf18514e9253c173e0261fcd2cebaf9efe8ca6024d46b1bc1ba866a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33671"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33670"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33667"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33669"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33668"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/2c290e95bcf1",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-154736": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "0927c77a91cb",
	                        "addons-154736"
	                    ],
	                    "NetworkID": "0ce741a8f930f44069a6bdf9f4ed33b0b28aabc7b6040abdd1f84433f7a93e9c",
	                    "EndpointID": "0c120efe77a5545a5dd5f788310b2f79bca21a0517ac182d7e7a20aa1f26e532",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
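When only a few fields from a dump like the one above are needed, docker inspect accepts a Go template; for example, to read the container state and its address on the addons-154736 network:

	docker inspect -f '{{.State.Status}} {{(index .NetworkSettings.Networks "addons-154736").IPAddress}}' addons-154736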
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-154736 -n addons-154736
helpers_test.go:244: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-154736 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-154736 logs -n 25: (1.677931878s)
helpers_test.go:252: TestAddons/parallel/CSI logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-988176   | jenkins | v1.32.0 | 26 Dec 23 21:44 UTC |                     |
	|         | -p download-only-988176                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| start   | -o=json --download-only                                                                     | download-only-988176   | jenkins | v1.32.0 | 26 Dec 23 21:44 UTC |                     |
	|         | -p download-only-988176                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| start   | -o=json --download-only                                                                     | download-only-988176   | jenkins | v1.32.0 | 26 Dec 23 21:45 UTC |                     |
	|         | -p download-only-988176                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                                                           |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.32.0 | 26 Dec 23 21:45 UTC | 26 Dec 23 21:45 UTC |
	| delete  | -p download-only-988176                                                                     | download-only-988176   | jenkins | v1.32.0 | 26 Dec 23 21:45 UTC | 26 Dec 23 21:45 UTC |
	| delete  | -p download-only-988176                                                                     | download-only-988176   | jenkins | v1.32.0 | 26 Dec 23 21:45 UTC | 26 Dec 23 21:45 UTC |
	| start   | --download-only -p                                                                          | download-docker-374836 | jenkins | v1.32.0 | 26 Dec 23 21:45 UTC |                     |
	|         | download-docker-374836                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-374836                                                                   | download-docker-374836 | jenkins | v1.32.0 | 26 Dec 23 21:45 UTC | 26 Dec 23 21:45 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-438777   | jenkins | v1.32.0 | 26 Dec 23 21:45 UTC |                     |
	|         | binary-mirror-438777                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:45525                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-438777                                                                     | binary-mirror-438777   | jenkins | v1.32.0 | 26 Dec 23 21:45 UTC | 26 Dec 23 21:45 UTC |
	| addons  | enable dashboard -p                                                                         | addons-154736          | jenkins | v1.32.0 | 26 Dec 23 21:45 UTC |                     |
	|         | addons-154736                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-154736          | jenkins | v1.32.0 | 26 Dec 23 21:45 UTC |                     |
	|         | addons-154736                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-154736 --wait=true                                                                | addons-154736          | jenkins | v1.32.0 | 26 Dec 23 21:45 UTC | 26 Dec 23 21:48 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker                                                               |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-154736          | jenkins | v1.32.0 | 26 Dec 23 21:48 UTC | 26 Dec 23 21:48 UTC |
	|         | -p addons-154736                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-154736 ip                                                                            | addons-154736          | jenkins | v1.32.0 | 26 Dec 23 21:48 UTC | 26 Dec 23 21:48 UTC |
	| addons  | addons-154736 addons disable                                                                | addons-154736          | jenkins | v1.32.0 | 26 Dec 23 21:48 UTC | 26 Dec 23 21:48 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-154736          | jenkins | v1.32.0 | 26 Dec 23 21:48 UTC | 26 Dec 23 21:48 UTC |
	|         | -p addons-154736                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-154736 ssh cat                                                                       | addons-154736          | jenkins | v1.32.0 | 26 Dec 23 21:48 UTC | 26 Dec 23 21:48 UTC |
	|         | /opt/local-path-provisioner/pvc-e94447a0-cc9f-4ee2-b024-1e95c001aae0_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-154736 addons disable                                                                | addons-154736          | jenkins | v1.32.0 | 26 Dec 23 21:48 UTC | 26 Dec 23 21:48 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-154736          | jenkins | v1.32.0 | 26 Dec 23 21:48 UTC | 26 Dec 23 21:48 UTC |
	|         | addons-154736                                                                               |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-154736          | jenkins | v1.32.0 | 26 Dec 23 21:48 UTC | 26 Dec 23 21:48 UTC |
	|         | addons-154736                                                                               |                        |         |         |                     |                     |
	| addons  | addons-154736 addons                                                                        | addons-154736          | jenkins | v1.32.0 | 26 Dec 23 21:48 UTC | 26 Dec 23 21:48 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
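	For reference, the wrapped Args column of the start entry above corresponds to this single invocation (reconstructed from the table):
	
	out/minikube-linux-arm64 start -p addons-154736 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker --container-runtime=crio --addons=ingress --addons=ingress-dns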
	
	
	==> Last Start <==
	Log file created at: 2023/12/26 21:45:17
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1226 21:45:17.357121  703653 out.go:296] Setting OutFile to fd 1 ...
	I1226 21:45:17.357260  703653 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 21:45:17.357268  703653 out.go:309] Setting ErrFile to fd 2...
	I1226 21:45:17.357273  703653 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 21:45:17.357532  703653 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17857-697646/.minikube/bin
	I1226 21:45:17.358030  703653 out.go:303] Setting JSON to false
	I1226 21:45:17.358813  703653 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":19651,"bootTime":1703607466,"procs":144,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1226 21:45:17.358889  703653 start.go:138] virtualization:  
	I1226 21:45:17.361650  703653 out.go:177] * [addons-154736] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1226 21:45:17.364230  703653 out.go:177]   - MINIKUBE_LOCATION=17857
	I1226 21:45:17.365978  703653 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1226 21:45:17.364366  703653 notify.go:220] Checking for updates...
	I1226 21:45:17.369777  703653 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17857-697646/kubeconfig
	I1226 21:45:17.371642  703653 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17857-697646/.minikube
	I1226 21:45:17.373457  703653 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1226 21:45:17.375253  703653 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1226 21:45:17.377723  703653 driver.go:392] Setting default libvirt URI to qemu:///system
	I1226 21:45:17.401923  703653 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1226 21:45:17.402036  703653 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 21:45:17.480034  703653 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-12-26 21:45:17.470030553 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1226 21:45:17.480160  703653 docker.go:295] overlay module found
	I1226 21:45:17.482441  703653 out.go:177] * Using the docker driver based on user configuration
	I1226 21:45:17.484480  703653 start.go:298] selected driver: docker
	I1226 21:45:17.484501  703653 start.go:902] validating driver "docker" against <nil>
	I1226 21:45:17.484556  703653 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1226 21:45:17.485187  703653 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 21:45:17.559712  703653 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-12-26 21:45:17.550602015 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1226 21:45:17.559868  703653 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1226 21:45:17.560121  703653 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1226 21:45:17.562089  703653 out.go:177] * Using Docker driver with root privileges
	I1226 21:45:17.564061  703653 cni.go:84] Creating CNI manager for ""
	I1226 21:45:17.564086  703653 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1226 21:45:17.564098  703653 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1226 21:45:17.564115  703653 start_flags.go:323] config:
	{Name:addons-154736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-154736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 21:45:17.566567  703653 out.go:177] * Starting control plane node addons-154736 in cluster addons-154736
	I1226 21:45:17.568294  703653 cache.go:121] Beginning downloading kic base image for docker with crio
	I1226 21:45:17.570474  703653 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I1226 21:45:17.572562  703653 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1226 21:45:17.572616  703653 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17857-697646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I1226 21:45:17.572637  703653 cache.go:56] Caching tarball of preloaded images
	I1226 21:45:17.572720  703653 preload.go:174] Found /home/jenkins/minikube-integration/17857-697646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1226 21:45:17.572729  703653 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1226 21:45:17.573075  703653 profile.go:148] Saving config to /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/config.json ...
	I1226 21:45:17.573094  703653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/config.json: {Name:mk543582001de673a7ac0933815d446a06676405 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:45:17.573254  703653 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I1226 21:45:17.589840  703653 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c to local cache
	I1226 21:45:17.589980  703653 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory
	I1226 21:45:17.590005  703653 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory, skipping pull
	I1226 21:45:17.590014  703653 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in cache, skipping pull
	I1226 21:45:17.590022  703653 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c as a tarball
	I1226 21:45:17.590027  703653 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c from local cache
	I1226 21:45:33.563146  703653 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c from cached tarball
	I1226 21:45:33.563189  703653 cache.go:194] Successfully downloaded all kic artifacts
	I1226 21:45:33.563259  703653 start.go:365] acquiring machines lock for addons-154736: {Name:mk2d6ec3bfe0e7c6048525ebd8a1df5b118807f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 21:45:33.563390  703653 start.go:369] acquired machines lock for "addons-154736" in 102.562µs
	I1226 21:45:33.563421  703653 start.go:93] Provisioning new machine with config: &{Name:addons-154736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-154736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1226 21:45:33.563501  703653 start.go:125] createHost starting for "" (driver="docker")
	I1226 21:45:33.565694  703653 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1226 21:45:33.565980  703653 start.go:159] libmachine.API.Create for "addons-154736" (driver="docker")
	I1226 21:45:33.566014  703653 client.go:168] LocalClient.Create starting
	I1226 21:45:33.566134  703653 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem
	I1226 21:45:34.659916  703653 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/cert.pem
	I1226 21:45:35.203057  703653 cli_runner.go:164] Run: docker network inspect addons-154736 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1226 21:45:35.223431  703653 cli_runner.go:211] docker network inspect addons-154736 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1226 21:45:35.223524  703653 network_create.go:281] running [docker network inspect addons-154736] to gather additional debugging logs...
	I1226 21:45:35.223560  703653 cli_runner.go:164] Run: docker network inspect addons-154736
	W1226 21:45:35.241707  703653 cli_runner.go:211] docker network inspect addons-154736 returned with exit code 1
	I1226 21:45:35.241741  703653 network_create.go:284] error running [docker network inspect addons-154736]: docker network inspect addons-154736: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-154736 not found
	I1226 21:45:35.241753  703653 network_create.go:286] output of [docker network inspect addons-154736]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-154736 not found
	
	** /stderr **
	I1226 21:45:35.241866  703653 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1226 21:45:35.259937  703653 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40024e5ac0}
	I1226 21:45:35.259975  703653 network_create.go:124] attempt to create docker network addons-154736 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1226 21:45:35.260042  703653 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-154736 addons-154736
	I1226 21:45:35.335366  703653 network_create.go:108] docker network addons-154736 192.168.49.0/24 created
	I1226 21:45:35.335400  703653 kic.go:121] calculated static IP "192.168.49.2" for the "addons-154736" container
	I1226 21:45:35.335480  703653 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1226 21:45:35.352573  703653 cli_runner.go:164] Run: docker volume create addons-154736 --label name.minikube.sigs.k8s.io=addons-154736 --label created_by.minikube.sigs.k8s.io=true
	I1226 21:45:35.370827  703653 oci.go:103] Successfully created a docker volume addons-154736
	I1226 21:45:35.370918  703653 cli_runner.go:164] Run: docker run --rm --name addons-154736-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-154736 --entrypoint /usr/bin/test -v addons-154736:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib
	I1226 21:45:37.529102  703653 cli_runner.go:217] Completed: docker run --rm --name addons-154736-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-154736 --entrypoint /usr/bin/test -v addons-154736:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib: (2.158128411s)
	I1226 21:45:37.529133  703653 oci.go:107] Successfully prepared a docker volume addons-154736
	I1226 21:45:37.529167  703653 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1226 21:45:37.529191  703653 kic.go:194] Starting extracting preloaded images to volume ...
	I1226 21:45:37.529267  703653 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17857-697646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-154736:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir
	I1226 21:45:41.723596  703653 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17857-697646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-154736:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir: (4.194288032s)
	I1226 21:45:41.723631  703653 kic.go:203] duration metric: took 4.194436 seconds to extract preloaded images to volume
	W1226 21:45:41.723767  703653 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1226 21:45:41.723910  703653 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1226 21:45:41.790451  703653 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-154736 --name addons-154736 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-154736 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-154736 --network addons-154736 --ip 192.168.49.2 --volume addons-154736:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c
	I1226 21:45:42.139875  703653 cli_runner.go:164] Run: docker container inspect addons-154736 --format={{.State.Running}}
	I1226 21:45:42.180243  703653 cli_runner.go:164] Run: docker container inspect addons-154736 --format={{.State.Status}}
	I1226 21:45:42.207149  703653 cli_runner.go:164] Run: docker exec addons-154736 stat /var/lib/dpkg/alternatives/iptables
	I1226 21:45:42.298703  703653 oci.go:144] the created container "addons-154736" has a running status.
	I1226 21:45:42.298732  703653 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17857-697646/.minikube/machines/addons-154736/id_rsa...
	I1226 21:45:43.584121  703653 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17857-697646/.minikube/machines/addons-154736/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1226 21:45:43.607127  703653 cli_runner.go:164] Run: docker container inspect addons-154736 --format={{.State.Status}}
	I1226 21:45:43.625334  703653 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1226 21:45:43.625358  703653 kic_runner.go:114] Args: [docker exec --privileged addons-154736 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1226 21:45:43.682322  703653 cli_runner.go:164] Run: docker container inspect addons-154736 --format={{.State.Status}}
	I1226 21:45:43.701947  703653 machine.go:88] provisioning docker machine ...
	I1226 21:45:43.701980  703653 ubuntu.go:169] provisioning hostname "addons-154736"
	I1226 21:45:43.702051  703653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154736
	I1226 21:45:43.727271  703653 main.go:141] libmachine: Using SSH client type: native
	I1226 21:45:43.727709  703653 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33671 <nil> <nil>}
	I1226 21:45:43.727730  703653 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-154736 && echo "addons-154736" | sudo tee /etc/hostname
	I1226 21:45:43.887415  703653 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-154736
	
	I1226 21:45:43.887494  703653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154736
	I1226 21:45:43.907302  703653 main.go:141] libmachine: Using SSH client type: native
	I1226 21:45:43.907710  703653 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33671 <nil> <nil>}
	I1226 21:45:43.907728  703653 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-154736' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-154736/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-154736' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1226 21:45:44.045974  703653 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1226 21:45:44.046002  703653 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17857-697646/.minikube CaCertPath:/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17857-697646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17857-697646/.minikube}
	I1226 21:45:44.046068  703653 ubuntu.go:177] setting up certificates
	I1226 21:45:44.046078  703653 provision.go:83] configureAuth start
	I1226 21:45:44.046159  703653 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-154736
	I1226 21:45:44.065356  703653 provision.go:138] copyHostCerts
	I1226 21:45:44.065455  703653 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17857-697646/.minikube/ca.pem (1082 bytes)
	I1226 21:45:44.065605  703653 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17857-697646/.minikube/cert.pem (1123 bytes)
	I1226 21:45:44.065670  703653 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17857-697646/.minikube/key.pem (1679 bytes)
	I1226 21:45:44.065751  703653 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17857-697646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca-key.pem org=jenkins.addons-154736 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-154736]
	I1226 21:45:44.682633  703653 provision.go:172] copyRemoteCerts
	I1226 21:45:44.682703  703653 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1226 21:45:44.682742  703653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154736
	I1226 21:45:44.700544  703653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33671 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/addons-154736/id_rsa Username:docker}
	I1226 21:45:44.799116  703653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1226 21:45:44.827540  703653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1226 21:45:44.855809  703653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1226 21:45:44.884433  703653 provision.go:86] duration metric: configureAuth took 838.312591ms
	I1226 21:45:44.884503  703653 ubuntu.go:193] setting minikube options for container-runtime
	I1226 21:45:44.884750  703653 config.go:182] Loaded profile config "addons-154736": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1226 21:45:44.884864  703653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154736
	I1226 21:45:44.902598  703653 main.go:141] libmachine: Using SSH client type: native
	I1226 21:45:44.903006  703653 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33671 <nil> <nil>}
	I1226 21:45:44.903028  703653 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1226 21:45:45.207361  703653 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1226 21:45:45.207438  703653 machine.go:91] provisioned docker machine in 1.505466578s
	I1226 21:45:45.207468  703653 client.go:171] LocalClient.Create took 11.641443787s
	I1226 21:45:45.207516  703653 start.go:167] duration metric: libmachine.API.Create for "addons-154736" took 11.641516072s
	I1226 21:45:45.207545  703653 start.go:300] post-start starting for "addons-154736" (driver="docker")
	I1226 21:45:45.207576  703653 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1226 21:45:45.207691  703653 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1226 21:45:45.207766  703653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154736
	I1226 21:45:45.239347  703653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33671 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/addons-154736/id_rsa Username:docker}
	I1226 21:45:45.349897  703653 ssh_runner.go:195] Run: cat /etc/os-release
	I1226 21:45:45.355640  703653 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1226 21:45:45.355681  703653 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1226 21:45:45.355694  703653 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1226 21:45:45.355701  703653 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1226 21:45:45.355711  703653 filesync.go:126] Scanning /home/jenkins/minikube-integration/17857-697646/.minikube/addons for local assets ...
	I1226 21:45:45.355790  703653 filesync.go:126] Scanning /home/jenkins/minikube-integration/17857-697646/.minikube/files for local assets ...
	I1226 21:45:45.355834  703653 start.go:303] post-start completed in 148.256524ms
	I1226 21:45:45.356161  703653 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-154736
	I1226 21:45:45.377553  703653 profile.go:148] Saving config to /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/config.json ...
	I1226 21:45:45.377843  703653 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1226 21:45:45.377893  703653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154736
	I1226 21:45:45.397872  703653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33671 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/addons-154736/id_rsa Username:docker}
	I1226 21:45:45.494484  703653 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1226 21:45:45.500291  703653 start.go:128] duration metric: createHost completed in 11.93677223s
	I1226 21:45:45.500316  703653 start.go:83] releasing machines lock for "addons-154736", held for 11.93691191s
	I1226 21:45:45.500400  703653 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-154736
	I1226 21:45:45.518617  703653 ssh_runner.go:195] Run: cat /version.json
	I1226 21:45:45.518632  703653 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1226 21:45:45.518672  703653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154736
	I1226 21:45:45.518688  703653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154736
	I1226 21:45:45.540730  703653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33671 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/addons-154736/id_rsa Username:docker}
	I1226 21:45:45.541414  703653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33671 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/addons-154736/id_rsa Username:docker}
	I1226 21:45:45.773361  703653 ssh_runner.go:195] Run: systemctl --version
	I1226 21:45:45.779480  703653 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1226 21:45:45.928374  703653 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1226 21:45:45.934285  703653 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1226 21:45:45.960009  703653 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1226 21:45:45.960108  703653 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1226 21:45:46.007940  703653 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1226 21:45:46.008022  703653 start.go:475] detecting cgroup driver to use...
	I1226 21:45:46.008095  703653 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1226 21:45:46.008197  703653 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1226 21:45:46.027491  703653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1226 21:45:46.041920  703653 docker.go:203] disabling cri-docker service (if available) ...
	I1226 21:45:46.042015  703653 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1226 21:45:46.058996  703653 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1226 21:45:46.076168  703653 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1226 21:45:46.176076  703653 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1226 21:45:46.283099  703653 docker.go:219] disabling docker service ...
	I1226 21:45:46.283188  703653 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1226 21:45:46.304709  703653 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1226 21:45:46.318373  703653 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1226 21:45:46.415364  703653 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1226 21:45:46.525204  703653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1226 21:45:46.538401  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1226 21:45:46.558845  703653 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1226 21:45:46.558912  703653 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1226 21:45:46.570719  703653 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1226 21:45:46.570843  703653 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1226 21:45:46.582868  703653 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1226 21:45:46.594679  703653 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
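Annotation: taken together, the three sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with the keys below (a reconstruction from the commands, not a dump of the file; all other keys are untouched). A spot-check on the node:

	# Expected fragment after the edits above:
	#   pause_image = "registry.k8s.io/pause:3.9"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	sudo grep -E '^(pause_image|cgroup_manager|conmon_cgroup) =' /etc/crio/crio.conf.d/02-crio.conf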
	I1226 21:45:46.607554  703653 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1226 21:45:46.618987  703653 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1226 21:45:46.629441  703653 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1226 21:45:46.640009  703653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1226 21:45:46.737436  703653 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1226 21:45:46.868766  703653 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1226 21:45:46.868855  703653 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1226 21:45:46.873648  703653 start.go:543] Will wait 60s for crictl version
	I1226 21:45:46.873714  703653 ssh_runner.go:195] Run: which crictl
	I1226 21:45:46.878246  703653 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1226 21:45:46.923846  703653 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1226 21:45:46.923968  703653 ssh_runner.go:195] Run: crio --version
	I1226 21:45:46.972287  703653 ssh_runner.go:195] Run: crio --version
	I1226 21:45:47.024056  703653 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I1226 21:45:47.026175  703653 cli_runner.go:164] Run: docker network inspect addons-154736 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1226 21:45:47.043826  703653 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1226 21:45:47.048434  703653 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
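Annotation: note the pattern above. The new /etc/hosts content is assembled in a temp file and installed with sudo cp, because the > redirection is performed by the calling user's shell and could not open /etc/hosts directly. The same pattern in isolation:

	# Build the replacement unprivileged, then copy into place as root;
	# "sudo cmd > /etc/hosts" would fail since the shell, not sudo, opens the file.
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; echo $'192.168.49.1\thost.minikube.internal'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts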
	I1226 21:45:47.062328  703653 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1226 21:45:47.062400  703653 ssh_runner.go:195] Run: sudo crictl images --output json
	I1226 21:45:47.129771  703653 crio.go:496] all images are preloaded for cri-o runtime.
	I1226 21:45:47.129798  703653 crio.go:415] Images already preloaded, skipping extraction
	I1226 21:45:47.129855  703653 ssh_runner.go:195] Run: sudo crictl images --output json
	I1226 21:45:47.170543  703653 crio.go:496] all images are preloaded for cri-o runtime.
	I1226 21:45:47.170567  703653 cache_images.go:84] Images are preloaded, skipping loading
	I1226 21:45:47.170642  703653 ssh_runner.go:195] Run: crio config
	I1226 21:45:47.225638  703653 cni.go:84] Creating CNI manager for ""
	I1226 21:45:47.225661  703653 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1226 21:45:47.225693  703653 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1226 21:45:47.225714  703653 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-154736 NodeName:addons-154736 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1226 21:45:47.225856  703653 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-154736"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
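Annotation: the generated config above bundles four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). If needed, it can be validated without bootstrapping the node, using the same invocation style as the init command further below (a sketch; the file lands at the path shown later in the log):

	# Dry-run parses and validates all four documents without starting the cluster.
	sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init \
	    --config /var/tmp/minikube/kubeadm.yaml --dry-run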
	
	I1226 21:45:47.225917  703653 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-154736 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-154736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1226 21:45:47.225985  703653 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1226 21:45:47.237056  703653 binaries.go:44] Found k8s binaries, skipping transfer
	I1226 21:45:47.237133  703653 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1226 21:45:47.247946  703653 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I1226 21:45:47.269840  703653 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
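Annotation: after the unit and drop-in above are copied, systemd must re-read them before the kubelet can run with the new flags. The log does not show this step explicitly (here kubeadm's kubelet-start phase, visible further below, performs the actual start), but the conventional sequence is:

	# Pick up /lib/systemd/system/kubelet.service and the 10-kubeadm.conf drop-in.
	sudo systemctl daemon-reload
	sudo systemctl restart kubelet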
	I1226 21:45:47.291674  703653 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I1226 21:45:47.313342  703653 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1226 21:45:47.317861  703653 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1226 21:45:47.331485  703653 certs.go:56] Setting up /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736 for IP: 192.168.49.2
	I1226 21:45:47.331518  703653 certs.go:190] acquiring lock for shared ca certs: {Name:mke6488a150c186a525017f74b8a69a9f5240d76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:45:47.331655  703653 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17857-697646/.minikube/ca.key
	I1226 21:45:47.957856  703653 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17857-697646/.minikube/ca.crt ...
	I1226 21:45:47.957886  703653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-697646/.minikube/ca.crt: {Name:mk47f0115b5b2e0f9fb3d82c3586bf65061aba13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:45:47.958103  703653 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17857-697646/.minikube/ca.key ...
	I1226 21:45:47.958116  703653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-697646/.minikube/ca.key: {Name:mkf78405cdbf4f9984f2752ec84f5767189bbbb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:45:47.958203  703653 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17857-697646/.minikube/proxy-client-ca.key
	I1226 21:45:48.793651  703653 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17857-697646/.minikube/proxy-client-ca.crt ...
	I1226 21:45:48.793685  703653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-697646/.minikube/proxy-client-ca.crt: {Name:mkfdb1f360b5d2e7d5f43ab0b751b43bd0785f9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:45:48.793879  703653 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17857-697646/.minikube/proxy-client-ca.key ...
	I1226 21:45:48.793891  703653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-697646/.minikube/proxy-client-ca.key: {Name:mkdcc5cfb23c652bc0a238c143809b638efe2934 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:45:48.794001  703653 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/client.key
	I1226 21:45:48.794021  703653 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/client.crt with IP's: []
	I1226 21:45:49.130100  703653 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/client.crt ...
	I1226 21:45:49.130132  703653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/client.crt: {Name:mk5b103a47afc40825354234830fdd6d328e23cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:45:49.130328  703653 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/client.key ...
	I1226 21:45:49.130341  703653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/client.key: {Name:mk8109095c1de71d7b1e565af62dedeafa19e192 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:45:49.130979  703653 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/apiserver.key.dd3b5fb2
	I1226 21:45:49.131003  703653 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1226 21:45:49.342573  703653 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/apiserver.crt.dd3b5fb2 ...
	I1226 21:45:49.342606  703653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/apiserver.crt.dd3b5fb2: {Name:mkbf4e612869b6431d860f33b33b959adfcdb9d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:45:49.342798  703653 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/apiserver.key.dd3b5fb2 ...
	I1226 21:45:49.342811  703653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/apiserver.key.dd3b5fb2: {Name:mkd83808fc939bd68c9f62be01c7ab9dc98abd0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:45:49.342902  703653 certs.go:337] copying /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/apiserver.crt
	I1226 21:45:49.342981  703653 certs.go:341] copying /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/apiserver.key
	I1226 21:45:49.343036  703653 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/proxy-client.key
	I1226 21:45:49.343051  703653 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/proxy-client.crt with IP's: []
	I1226 21:45:49.729468  703653 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/proxy-client.crt ...
	I1226 21:45:49.729499  703653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/proxy-client.crt: {Name:mk5f2f9075967085d62c91e8f08859c48d8fb037 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:45:49.729683  703653 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/proxy-client.key ...
	I1226 21:45:49.729697  703653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/proxy-client.key: {Name:mk6e3ac998613569553a1ffff7932a3627336a0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:45:49.729906  703653 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca-key.pem (1675 bytes)
	I1226 21:45:49.729951  703653 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem (1082 bytes)
	I1226 21:45:49.730001  703653 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/home/jenkins/minikube-integration/17857-697646/.minikube/certs/cert.pem (1123 bytes)
	I1226 21:45:49.730030  703653 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/home/jenkins/minikube-integration/17857-697646/.minikube/certs/key.pem (1679 bytes)
	I1226 21:45:49.730627  703653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1226 21:45:49.761140  703653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1226 21:45:49.791177  703653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1226 21:45:49.820829  703653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1226 21:45:49.849585  703653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1226 21:45:49.878912  703653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1226 21:45:49.909268  703653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1226 21:45:49.941348  703653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1226 21:45:49.970836  703653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1226 21:45:49.999844  703653 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1226 21:45:50.029395  703653 ssh_runner.go:195] Run: openssl version
	I1226 21:45:50.037542  703653 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1226 21:45:50.050867  703653 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1226 21:45:50.056091  703653 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 26 21:45 /usr/share/ca-certificates/minikubeCA.pem
	I1226 21:45:50.056176  703653 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1226 21:45:50.065485  703653 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
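Annotation: the symlink name b5213941.0 above is not arbitrary. It is OpenSSL's subject-name hash of the CA certificate, with a .0 suffix for the first certificate with that hash, which is how OpenSSL locates CAs in /etc/ssl/certs. The hash command run just above prints it directly:

	# Prints the subject-name hash used as the symlink name (expected: b5213941).
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem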
	I1226 21:45:50.078013  703653 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1226 21:45:50.083315  703653 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1226 21:45:50.083437  703653 kubeadm.go:404] StartCluster: {Name:addons-154736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-154736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 21:45:50.083527  703653 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1226 21:45:50.083590  703653 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1226 21:45:50.144212  703653 cri.go:89] found id: ""
	I1226 21:45:50.144288  703653 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1226 21:45:50.156065  703653 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1226 21:45:50.167528  703653 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1226 21:45:50.167619  703653 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1226 21:45:50.179275  703653 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1226 21:45:50.179343  703653 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1226 21:45:50.239492  703653 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1226 21:45:50.239794  703653 kubeadm.go:322] [preflight] Running pre-flight checks
	I1226 21:45:50.287034  703653 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1226 21:45:50.287162  703653 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-aws
	I1226 21:45:50.287219  703653 kubeadm.go:322] OS: Linux
	I1226 21:45:50.287279  703653 kubeadm.go:322] CGROUPS_CPU: enabled
	I1226 21:45:50.287349  703653 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1226 21:45:50.287410  703653 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1226 21:45:50.287480  703653 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1226 21:45:50.287553  703653 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1226 21:45:50.287623  703653 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1226 21:45:50.287682  703653 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1226 21:45:50.287752  703653 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1226 21:45:50.287813  703653 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1226 21:45:50.370303  703653 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1226 21:45:50.370463  703653 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1226 21:45:50.370584  703653 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1226 21:45:50.630713  703653 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1226 21:45:50.635156  703653 out.go:204]   - Generating certificates and keys ...
	I1226 21:45:50.635285  703653 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1226 21:45:50.635356  703653 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1226 21:45:50.944392  703653 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1226 21:45:51.488209  703653 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1226 21:45:51.868223  703653 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1226 21:45:52.038881  703653 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1226 21:45:52.370275  703653 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1226 21:45:52.370434  703653 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-154736 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1226 21:45:52.635129  703653 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1226 21:45:52.635292  703653 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-154736 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1226 21:45:53.661223  703653 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1226 21:45:54.290814  703653 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1226 21:45:54.651197  703653 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1226 21:45:54.651394  703653 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1226 21:45:55.134969  703653 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1226 21:45:55.322944  703653 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1226 21:45:55.759311  703653 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1226 21:45:55.948385  703653 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1226 21:45:55.948895  703653 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1226 21:45:55.953472  703653 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1226 21:45:55.955844  703653 out.go:204]   - Booting up control plane ...
	I1226 21:45:55.955952  703653 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1226 21:45:55.956032  703653 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1226 21:45:55.957069  703653 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1226 21:45:55.967769  703653 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1226 21:45:55.968989  703653 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1226 21:45:55.969050  703653 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1226 21:45:56.074927  703653 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1226 21:46:03.077104  703653 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.002309 seconds
	I1226 21:46:03.077227  703653 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1226 21:46:03.091854  703653 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1226 21:46:03.619549  703653 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1226 21:46:03.619738  703653 kubeadm.go:322] [mark-control-plane] Marking the node addons-154736 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1226 21:46:04.130928  703653 kubeadm.go:322] [bootstrap-token] Using token: 08smwl.lifk3a8mo3dqg185
	I1226 21:46:04.132945  703653 out.go:204]   - Configuring RBAC rules ...
	I1226 21:46:04.133066  703653 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1226 21:46:04.138717  703653 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1226 21:46:04.147006  703653 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1226 21:46:04.150888  703653 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1226 21:46:04.154871  703653 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1226 21:46:04.160999  703653 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1226 21:46:04.172410  703653 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1226 21:46:04.397107  703653 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1226 21:46:04.545271  703653 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1226 21:46:04.546497  703653 kubeadm.go:322] 
	I1226 21:46:04.546573  703653 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1226 21:46:04.546583  703653 kubeadm.go:322] 
	I1226 21:46:04.546657  703653 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1226 21:46:04.546666  703653 kubeadm.go:322] 
	I1226 21:46:04.546691  703653 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1226 21:46:04.546750  703653 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1226 21:46:04.546804  703653 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1226 21:46:04.546813  703653 kubeadm.go:322] 
	I1226 21:46:04.546864  703653 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1226 21:46:04.546872  703653 kubeadm.go:322] 
	I1226 21:46:04.546917  703653 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1226 21:46:04.546925  703653 kubeadm.go:322] 
	I1226 21:46:04.546975  703653 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1226 21:46:04.547049  703653 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1226 21:46:04.547137  703653 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1226 21:46:04.547146  703653 kubeadm.go:322] 
	I1226 21:46:04.547225  703653 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1226 21:46:04.547301  703653 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1226 21:46:04.547309  703653 kubeadm.go:322] 
	I1226 21:46:04.547388  703653 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 08smwl.lifk3a8mo3dqg185 \
	I1226 21:46:04.547489  703653 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:eb99bb3bfadffb0fd08fb657f91b723758be2a0ceabacd68fe612edc25108351 \
	I1226 21:46:04.547515  703653 kubeadm.go:322] 	--control-plane 
	I1226 21:46:04.547525  703653 kubeadm.go:322] 
	I1226 21:46:04.547605  703653 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1226 21:46:04.547613  703653 kubeadm.go:322] 
	I1226 21:46:04.547691  703653 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 08smwl.lifk3a8mo3dqg185 \
	I1226 21:46:04.547788  703653 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:eb99bb3bfadffb0fd08fb657f91b723758be2a0ceabacd68fe612edc25108351 
	I1226 21:46:04.550011  703653 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I1226 21:46:04.550125  703653 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1226 21:46:04.550144  703653 cni.go:84] Creating CNI manager for ""
	I1226 21:46:04.550153  703653 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1226 21:46:04.552156  703653 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1226 21:46:04.553984  703653 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1226 21:46:04.565752  703653 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1226 21:46:04.565773  703653 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1226 21:46:04.619814  703653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1226 21:46:05.498806  703653 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1226 21:46:05.498945  703653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:46:05.499035  703653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=393f165ced08f66e4386491f243850f87982a22b minikube.k8s.io/name=addons-154736 minikube.k8s.io/updated_at=2023_12_26T21_46_05_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:46:05.656110  703653 ops.go:34] apiserver oom_adj: -16
	I1226 21:46:05.656189  703653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:46:06.157100  703653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:46:06.656948  703653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:46:07.157012  703653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:46:07.657245  703653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:46:08.156650  703653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:46:08.656541  703653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:46:09.157216  703653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:46:09.656418  703653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:46:10.156767  703653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:46:10.656764  703653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:46:11.156322  703653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:46:11.656675  703653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:46:12.156530  703653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:46:12.656297  703653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:46:13.156968  703653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:46:13.657245  703653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:46:14.156642  703653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:46:14.656931  703653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:46:15.156946  703653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:46:15.656910  703653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:46:16.157085  703653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:46:16.656471  703653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:46:17.156278  703653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:46:17.656294  703653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 21:46:17.768936  703653 kubeadm.go:1088] duration metric: took 12.270036305s to wait for elevateKubeSystemPrivileges.
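Annotation: the run of identical "get sa default" calls above is a readiness poll. minikube retries roughly every 500ms until the default ServiceAccount exists, so that the cluster-admin binding created earlier can take effect. The loop reduces to (a sketch, paths from the log):

	# Equivalent of the polling seen above (0.5s interval).
	until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done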
	I1226 21:46:17.768962  703653 kubeadm.go:406] StartCluster complete in 27.685530957s
	I1226 21:46:17.768979  703653 settings.go:142] acquiring lock: {Name:mk1b89d623875ac96830001bdd0fc2b8d8c10aec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:46:17.769094  703653 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17857-697646/kubeconfig
	I1226 21:46:17.769489  703653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-697646/kubeconfig: {Name:mk171fc32e21f516abb68bc5ebeb628b3c1d7f0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 21:46:17.770221  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1226 21:46:17.770499  703653 config.go:182] Loaded profile config "addons-154736": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1226 21:46:17.770617  703653 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I1226 21:46:17.770691  703653 addons.go:69] Setting yakd=true in profile "addons-154736"
	I1226 21:46:17.770708  703653 addons.go:237] Setting addon yakd=true in "addons-154736"
	I1226 21:46:17.770740  703653 host.go:66] Checking if "addons-154736" exists ...
	I1226 21:46:17.771199  703653 cli_runner.go:164] Run: docker container inspect addons-154736 --format={{.State.Status}}
	I1226 21:46:17.772718  703653 addons.go:69] Setting cloud-spanner=true in profile "addons-154736"
	I1226 21:46:17.772750  703653 addons.go:237] Setting addon cloud-spanner=true in "addons-154736"
	I1226 21:46:17.772735  703653 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-154736"
	I1226 21:46:17.772791  703653 host.go:66] Checking if "addons-154736" exists ...
	I1226 21:46:17.772832  703653 addons.go:237] Setting addon csi-hostpath-driver=true in "addons-154736"
	I1226 21:46:17.772893  703653 host.go:66] Checking if "addons-154736" exists ...
	I1226 21:46:17.773203  703653 cli_runner.go:164] Run: docker container inspect addons-154736 --format={{.State.Status}}
	I1226 21:46:17.773403  703653 cli_runner.go:164] Run: docker container inspect addons-154736 --format={{.State.Status}}
	I1226 21:46:17.772728  703653 addons.go:69] Setting metrics-server=true in profile "addons-154736"
	I1226 21:46:17.773824  703653 addons.go:237] Setting addon metrics-server=true in "addons-154736"
	I1226 21:46:17.773873  703653 host.go:66] Checking if "addons-154736" exists ...
	I1226 21:46:17.774269  703653 cli_runner.go:164] Run: docker container inspect addons-154736 --format={{.State.Status}}
	I1226 21:46:17.779281  703653 addons.go:69] Setting default-storageclass=true in profile "addons-154736"
	I1226 21:46:17.779314  703653 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-154736"
	I1226 21:46:17.779643  703653 cli_runner.go:164] Run: docker container inspect addons-154736 --format={{.State.Status}}
	I1226 21:46:17.792607  703653 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-154736"
	I1226 21:46:17.792651  703653 addons.go:237] Setting addon nvidia-device-plugin=true in "addons-154736"
	I1226 21:46:17.792699  703653 host.go:66] Checking if "addons-154736" exists ...
	I1226 21:46:17.793150  703653 cli_runner.go:164] Run: docker container inspect addons-154736 --format={{.State.Status}}
	I1226 21:46:17.804697  703653 addons.go:69] Setting gcp-auth=true in profile "addons-154736"
	I1226 21:46:17.804964  703653 mustload.go:65] Loading cluster: addons-154736
	I1226 21:46:17.805721  703653 addons.go:69] Setting registry=true in profile "addons-154736"
	I1226 21:46:17.805750  703653 addons.go:237] Setting addon registry=true in "addons-154736"
	I1226 21:46:17.805792  703653 host.go:66] Checking if "addons-154736" exists ...
	I1226 21:46:17.806207  703653 cli_runner.go:164] Run: docker container inspect addons-154736 --format={{.State.Status}}
	I1226 21:46:17.806448  703653 config.go:182] Loaded profile config "addons-154736": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1226 21:46:17.806745  703653 cli_runner.go:164] Run: docker container inspect addons-154736 --format={{.State.Status}}
	I1226 21:46:17.828753  703653 addons.go:69] Setting storage-provisioner=true in profile "addons-154736"
	I1226 21:46:17.828787  703653 addons.go:237] Setting addon storage-provisioner=true in "addons-154736"
	I1226 21:46:17.828833  703653 host.go:66] Checking if "addons-154736" exists ...
	I1226 21:46:17.829266  703653 cli_runner.go:164] Run: docker container inspect addons-154736 --format={{.State.Status}}
	I1226 21:46:17.804878  703653 addons.go:69] Setting ingress=true in profile "addons-154736"
	I1226 21:46:17.840652  703653 addons.go:237] Setting addon ingress=true in "addons-154736"
	I1226 21:46:17.840738  703653 host.go:66] Checking if "addons-154736" exists ...
	I1226 21:46:17.843946  703653 cli_runner.go:164] Run: docker container inspect addons-154736 --format={{.State.Status}}
	I1226 21:46:17.851917  703653 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-154736"
	I1226 21:46:17.851955  703653 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-154736"
	I1226 21:46:17.852399  703653 cli_runner.go:164] Run: docker container inspect addons-154736 --format={{.State.Status}}
	I1226 21:46:17.804889  703653 addons.go:69] Setting ingress-dns=true in profile "addons-154736"
	I1226 21:46:17.852637  703653 addons.go:237] Setting addon ingress-dns=true in "addons-154736"
	I1226 21:46:17.852715  703653 host.go:66] Checking if "addons-154736" exists ...
	I1226 21:46:17.853125  703653 cli_runner.go:164] Run: docker container inspect addons-154736 --format={{.State.Status}}
	I1226 21:46:17.874589  703653 addons.go:69] Setting volumesnapshots=true in profile "addons-154736"
	I1226 21:46:17.874671  703653 addons.go:237] Setting addon volumesnapshots=true in "addons-154736"
	I1226 21:46:17.874749  703653 host.go:66] Checking if "addons-154736" exists ...
	I1226 21:46:17.878996  703653 cli_runner.go:164] Run: docker container inspect addons-154736 --format={{.State.Status}}
	I1226 21:46:17.804896  703653 addons.go:69] Setting inspektor-gadget=true in profile "addons-154736"
	I1226 21:46:17.895266  703653 addons.go:237] Setting addon inspektor-gadget=true in "addons-154736"
	I1226 21:46:17.895344  703653 host.go:66] Checking if "addons-154736" exists ...
	I1226 21:46:17.895877  703653 cli_runner.go:164] Run: docker container inspect addons-154736 --format={{.State.Status}}
	I1226 21:46:18.007568  703653 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I1226 21:46:18.026335  703653 addons.go:429] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1226 21:46:18.026501  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1226 21:46:18.026615  703653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154736
	I1226 21:46:18.046125  703653 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.13
	I1226 21:46:18.026045  703653 host.go:66] Checking if "addons-154736" exists ...
	I1226 21:46:18.062757  703653 addons.go:429] installing /etc/kubernetes/addons/deployment.yaml
	I1226 21:46:18.066821  703653 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I1226 21:46:18.064795  703653 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1226 21:46:18.064803  703653 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1226 21:46:18.064807  703653 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1226 21:46:18.064821  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1226 21:46:18.067197  703653 addons.go:237] Setting addon default-storageclass=true in "addons-154736"
	I1226 21:46:18.073344  703653 host.go:66] Checking if "addons-154736" exists ...
	I1226 21:46:18.073857  703653 cli_runner.go:164] Run: docker container inspect addons-154736 --format={{.State.Status}}
	I1226 21:46:18.076019  703653 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1226 21:46:18.076045  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1226 21:46:18.076108  703653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154736
	I1226 21:46:18.074344  703653 addons.go:429] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1226 21:46:18.076137  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1226 21:46:18.076187  703653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154736
	I1226 21:46:18.104644  703653 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1226 21:46:18.074535  703653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154736
	I1226 21:46:18.094767  703653 addons.go:237] Setting addon storage-provisioner-rancher=true in "addons-154736"
	I1226 21:46:18.106659  703653 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1226 21:46:18.111864  703653 host.go:66] Checking if "addons-154736" exists ...
	I1226 21:46:18.113291  703653 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1226 21:46:18.125892  703653 cli_runner.go:164] Run: docker container inspect addons-154736 --format={{.State.Status}}
	I1226 21:46:18.126055  703653 addons.go:429] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1226 21:46:18.126072  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1226 21:46:18.126126  703653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154736
	I1226 21:46:18.125904  703653 out.go:177]   - Using image docker.io/registry:2.8.3
	I1226 21:46:18.156725  703653 addons.go:429] installing /etc/kubernetes/addons/registry-rc.yaml
	I1226 21:46:18.156754  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1226 21:46:18.156821  703653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154736
	I1226 21:46:18.167971  703653 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1226 21:46:18.172167  703653 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1226 21:46:18.174496  703653 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1226 21:46:18.176642  703653 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1226 21:46:18.179637  703653 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1226 21:46:18.182306  703653 addons.go:429] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1226 21:46:18.182372  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1226 21:46:18.182473  703653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154736
	I1226 21:46:18.193072  703653 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1226 21:46:18.165581  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1226 21:46:18.196363  703653 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I1226 21:46:18.196409  703653 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1226 21:46:18.202329  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1226 21:46:18.202409  703653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154736
	I1226 21:46:18.202582  703653 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.5
	I1226 21:46:18.204409  703653 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1226 21:46:18.202750  703653 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1226 21:46:18.210421  703653 addons.go:429] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1226 21:46:18.210452  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1226 21:46:18.210518  703653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154736
	I1226 21:46:18.238353  703653 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1226 21:46:18.240718  703653 addons.go:429] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1226 21:46:18.240741  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1226 21:46:18.240812  703653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154736
	I1226 21:46:18.259800  703653 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1226 21:46:18.259864  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1226 21:46:18.259969  703653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154736
	I1226 21:46:18.284670  703653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33671 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/addons-154736/id_rsa Username:docker}
	I1226 21:46:18.284999  703653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33671 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/addons-154736/id_rsa Username:docker}
	I1226 21:46:18.322452  703653 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I1226 21:46:18.322479  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1226 21:46:18.322539  703653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154736
	I1226 21:46:18.383702  703653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33671 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/addons-154736/id_rsa Username:docker}
	I1226 21:46:18.385611  703653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33671 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/addons-154736/id_rsa Username:docker}
	I1226 21:46:18.397710  703653 out.go:177]   - Using image docker.io/busybox:stable
	I1226 21:46:18.399834  703653 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1226 21:46:18.402027  703653 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1226 21:46:18.402047  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1226 21:46:18.402115  703653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154736
	I1226 21:46:18.418660  703653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33671 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/addons-154736/id_rsa Username:docker}
	I1226 21:46:18.440650  703653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33671 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/addons-154736/id_rsa Username:docker}
	I1226 21:46:18.469494  703653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33671 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/addons-154736/id_rsa Username:docker}
	I1226 21:46:18.480016  703653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33671 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/addons-154736/id_rsa Username:docker}
	I1226 21:46:18.481628  703653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33671 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/addons-154736/id_rsa Username:docker}
	I1226 21:46:18.493844  703653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33671 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/addons-154736/id_rsa Username:docker}
	I1226 21:46:18.529486  703653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33671 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/addons-154736/id_rsa Username:docker}
	I1226 21:46:18.536621  703653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33671 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/addons-154736/id_rsa Username:docker}
	I1226 21:46:18.560292  703653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33671 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/addons-154736/id_rsa Username:docker}
	I1226 21:46:18.719345  703653 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-154736" context rescaled to 1 replicas
	I1226 21:46:18.719381  703653 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1226 21:46:18.721615  703653 out.go:177] * Verifying Kubernetes components...
	I1226 21:46:18.723683  703653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
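The kubelet probe above relies on systemctl's exit status alone: is-active --quiet prints nothing and returns 0 only when the unit is active. A minimal shell sketch of the same check (the echo lines are illustrative, not part of minikube):

    sudo systemctl is-active --quiet kubelet \
        && echo "kubelet: active" \
        || echo "kubelet: not active"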
	I1226 21:46:18.823973  703653 addons.go:429] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1226 21:46:18.823993  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1226 21:46:18.843133  703653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1226 21:46:18.886287  703653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1226 21:46:18.890874  703653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1226 21:46:18.894516  703653 addons.go:429] installing /etc/kubernetes/addons/registry-svc.yaml
	I1226 21:46:18.894586  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1226 21:46:18.927177  703653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1226 21:46:18.970962  703653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1226 21:46:18.986951  703653 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1226 21:46:18.987022  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1226 21:46:19.021253  703653 addons.go:429] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1226 21:46:19.021322  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1226 21:46:19.028642  703653 addons.go:429] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1226 21:46:19.028712  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1226 21:46:19.060764  703653 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1226 21:46:19.060835  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1226 21:46:19.064691  703653 addons.go:429] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1226 21:46:19.064761  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1226 21:46:19.067867  703653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1226 21:46:19.074108  703653 addons.go:429] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1226 21:46:19.074185  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1226 21:46:19.149129  703653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1226 21:46:19.173549  703653 addons.go:429] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1226 21:46:19.173622  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1226 21:46:19.181331  703653 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1226 21:46:19.181402  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1226 21:46:19.204375  703653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1226 21:46:19.247997  703653 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1226 21:46:19.248067  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1226 21:46:19.252641  703653 addons.go:429] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1226 21:46:19.252714  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1226 21:46:19.257178  703653 addons.go:429] installing /etc/kubernetes/addons/ig-role.yaml
	I1226 21:46:19.257247  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1226 21:46:19.375270  703653 addons.go:429] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1226 21:46:19.375338  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1226 21:46:19.411800  703653 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1226 21:46:19.411874  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1226 21:46:19.474697  703653 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1226 21:46:19.474770  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1226 21:46:19.478562  703653 addons.go:429] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1226 21:46:19.478631  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1226 21:46:19.487788  703653 addons.go:429] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1226 21:46:19.487864  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1226 21:46:19.607392  703653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1226 21:46:19.643328  703653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1226 21:46:19.662535  703653 addons.go:429] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1226 21:46:19.662607  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1226 21:46:19.682945  703653 addons.go:429] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1226 21:46:19.683021  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1226 21:46:19.721905  703653 addons.go:429] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1226 21:46:19.721979  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1226 21:46:19.783208  703653 addons.go:429] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1226 21:46:19.783276  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1226 21:46:19.813062  703653 addons.go:429] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1226 21:46:19.813122  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1226 21:46:19.875998  703653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1226 21:46:19.892038  703653 addons.go:429] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1226 21:46:19.892108  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1226 21:46:19.898813  703653 addons.go:429] installing /etc/kubernetes/addons/ig-crd.yaml
	I1226 21:46:19.898880  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1226 21:46:19.970604  703653 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1226 21:46:19.970675  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1226 21:46:19.979157  703653 addons.go:429] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1226 21:46:19.979228  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I1226 21:46:20.068804  703653 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1226 21:46:20.068877  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1226 21:46:20.071774  703653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1226 21:46:20.223830  703653 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1226 21:46:20.223899  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1226 21:46:20.393097  703653 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1226 21:46:20.393163  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1226 21:46:20.525129  703653 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1226 21:46:20.525203  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1226 21:46:20.653419  703653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1226 21:46:20.842620  703653 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.646185588s)
	I1226 21:46:20.842696  703653 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
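The completed sed pipeline above rewrites the coredns ConfigMap in place: it inserts a log directive before the existing errors line and a hosts block before the forward . /etc/resolv.conf line. Reconstructed from the sed expressions (not captured from the cluster), the relevant part of the resulting Corefile looks like:

    log
    errors
    ...
    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf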
	I1226 21:46:20.842737  703653 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.119032448s)
	I1226 21:46:20.843712  703653 node_ready.go:35] waiting up to 6m0s for node "addons-154736" to be "Ready" ...
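node_ready.go polls the node object through the Kubernetes API; from a shell the same wait can be approximated with kubectl (a sketch — the node name and the 6m timeout come from the log lines above):

    kubectl --context addons-154736 wait node/addons-154736 \
        --for=condition=Ready --timeout=6m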
	I1226 21:46:22.115492  703653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.272322247s)
	I1226 21:46:22.929290  703653 node_ready.go:58] node "addons-154736" has status "Ready":"False"
	I1226 21:46:23.873940  703653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.987579254s)
	I1226 21:46:23.874031  703653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.983094048s)
	I1226 21:46:23.874054  703653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.946818712s)
	I1226 21:46:23.899655  703653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.928558019s)
	W1226 21:46:23.935588  703653 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
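The warning above is a routine optimistic-concurrency conflict: two writers raced on the local-path StorageClass, so the update was rejected with a stale resourceVersion and only needs to be retried. Marking the class default by hand uses the standard annotation (a sketch against this run's context):

    kubectl --context addons-154736 patch storageclass local-path \
        -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'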
	I1226 21:46:24.549723  703653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.481784866s)
	I1226 21:46:24.550248  703653 addons.go:473] Verifying addon ingress=true in "addons-154736"
	I1226 21:46:24.550339  703653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.34541146s)
	I1226 21:46:24.550369  703653 addons.go:473] Verifying addon registry=true in "addons-154736"
	I1226 21:46:24.552781  703653 out.go:177] * Verifying ingress addon...
	I1226 21:46:24.555640  703653 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1226 21:46:24.552845  703653 out.go:177] * Verifying registry addon...
	I1226 21:46:24.549972  703653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.90658226s)
	I1226 21:46:24.550052  703653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.673983728s)
	I1226 21:46:24.550108  703653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.478251658s)
	I1226 21:46:24.549829  703653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.400630337s)
	I1226 21:46:24.549929  703653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.942445076s)
	I1226 21:46:24.557896  703653 addons.go:473] Verifying addon metrics-server=true in "addons-154736"
	I1226 21:46:24.558715  703653 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1226 21:46:24.560838  703653 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-154736 service yakd-dashboard -n yakd-dashboard
	
	
	W1226 21:46:24.559008  703653 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1226 21:46:24.562704  703653 retry.go:31] will retry after 132.401694ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
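Both failures above are the usual CRD-establishment race: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass in the same kubectl apply that creates the snapshot.storage.k8s.io CRDs, so the REST mapping for the new kind is not yet available. The apply --force retry below recovers; applying the CRDs first and waiting for them to be Established avoids the race entirely (a sketch using the file names from this run):

    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for=condition=established --timeout=60s \
        crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml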
	I1226 21:46:24.569656  703653 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1226 21:46:24.569680  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:24.574284  703653 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1226 21:46:24.574358  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
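The kapi.go waits above poll pods by label selector until they leave Pending; from a shell the equivalent readiness gate is roughly (a sketch — selector and namespace from the log, the 6m timeout is illustrative):

    kubectl --context addons-154736 -n ingress-nginx wait pod \
        -l app.kubernetes.io/name=ingress-nginx --for=condition=Ready --timeout=6m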
	I1226 21:46:24.695895  703653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1226 21:46:24.956686  703653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.303171454s)
	I1226 21:46:24.956765  703653 addons.go:473] Verifying addon csi-hostpath-driver=true in "addons-154736"
	I1226 21:46:24.960028  703653 out.go:177] * Verifying csi-hostpath-driver addon...
	I1226 21:46:24.963091  703653 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1226 21:46:24.975073  703653 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1226 21:46:24.975149  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:25.071087  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:25.079722  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:25.350710  703653 node_ready.go:58] node "addons-154736" has status "Ready":"False"
	I1226 21:46:25.467870  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:25.561828  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:25.571223  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:25.970056  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:26.065018  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:26.068043  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:26.205366  703653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.509380595s)
	I1226 21:46:26.468713  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:26.570044  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:26.584924  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:26.924708  703653 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1226 21:46:26.924807  703653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154736
	I1226 21:46:26.956235  703653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33671 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/addons-154736/id_rsa Username:docker}
	I1226 21:46:26.968334  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:27.062289  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:27.065379  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:27.146887  703653 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1226 21:46:27.172562  703653 addons.go:237] Setting addon gcp-auth=true in "addons-154736"
	I1226 21:46:27.172661  703653 host.go:66] Checking if "addons-154736" exists ...
	I1226 21:46:27.173152  703653 cli_runner.go:164] Run: docker container inspect addons-154736 --format={{.State.Status}}
	I1226 21:46:27.207957  703653 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1226 21:46:27.208012  703653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-154736
	I1226 21:46:27.246139  703653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33671 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/addons-154736/id_rsa Username:docker}
	I1226 21:46:27.365287  703653 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1226 21:46:27.367384  703653 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1226 21:46:27.369310  703653 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1226 21:46:27.369365  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1226 21:46:27.429064  703653 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1226 21:46:27.429087  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1226 21:46:27.469042  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:27.489750  703653 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1226 21:46:27.489813  703653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1226 21:46:27.541813  703653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1226 21:46:27.562086  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:27.564702  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:27.847272  703653 node_ready.go:58] node "addons-154736" has status "Ready":"False"
	I1226 21:46:27.968383  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:28.062452  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:28.067190  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:28.469043  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:28.612064  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:28.613425  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:28.694641  703653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.152734381s)
	I1226 21:46:28.697723  703653 addons.go:473] Verifying addon gcp-auth=true in "addons-154736"
	I1226 21:46:28.699657  703653 out.go:177] * Verifying gcp-auth addon...
	I1226 21:46:28.702193  703653 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1226 21:46:28.713134  703653 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1226 21:46:28.713169  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:28.968083  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:29.062578  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:29.065752  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:29.206575  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:29.467801  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:29.562917  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:29.566551  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:29.706567  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:29.847539  703653 node_ready.go:58] node "addons-154736" has status "Ready":"False"
	I1226 21:46:29.969640  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:30.065196  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:30.065821  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:30.207650  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:30.468600  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:30.559955  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:30.563229  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:30.706713  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:30.967667  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:31.060215  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:31.063555  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:31.206014  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:31.467822  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:31.560684  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:31.564582  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:31.706453  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:31.848194  703653 node_ready.go:58] node "addons-154736" has status "Ready":"False"
	I1226 21:46:31.967625  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:32.071768  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:32.074574  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:32.206257  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:32.467742  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:32.560210  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:32.564370  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:32.706417  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:32.968484  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:33.059977  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:33.063079  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:33.206191  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:33.469654  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:33.560128  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:33.563933  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:33.706915  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:33.968138  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:34.060699  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:34.062966  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:34.205994  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:34.347763  703653 node_ready.go:58] node "addons-154736" has status "Ready":"False"
	I1226 21:46:34.467659  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:34.560545  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:34.564030  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:34.706956  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:34.967462  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:35.060365  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:35.063174  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:35.206350  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:35.467439  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:35.560949  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:35.563816  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:35.706116  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:35.967771  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:36.060270  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:36.062972  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:36.206749  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:36.347898  703653 node_ready.go:58] node "addons-154736" has status "Ready":"False"
	I1226 21:46:36.468162  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:36.562270  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:36.565356  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:36.706459  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:36.969062  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:37.060019  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:37.062458  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:37.205865  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:37.468108  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:37.560701  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:37.563249  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:37.706501  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:37.967330  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:38.061409  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:38.064071  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:38.206358  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:38.467649  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:38.560270  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:38.563682  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:38.706163  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:38.847114  703653 node_ready.go:58] node "addons-154736" has status "Ready":"False"
	I1226 21:46:38.968020  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:39.061302  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:39.062938  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:39.206046  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:39.468201  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:39.560687  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:39.563258  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:39.705944  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:39.967732  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:40.060511  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:40.063552  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:40.206079  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:40.467704  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:40.559890  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:40.562420  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:40.706020  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:40.847265  703653 node_ready.go:58] node "addons-154736" has status "Ready":"False"
	I1226 21:46:40.967615  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:41.059557  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:41.063115  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:41.206147  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:41.468171  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:41.561080  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:41.563399  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:41.705797  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:41.968183  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:42.060346  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:42.065795  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:42.206857  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:42.467804  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:42.559940  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:42.563378  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:42.706060  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:42.847516  703653 node_ready.go:58] node "addons-154736" has status "Ready":"False"
	I1226 21:46:42.967877  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:43.061125  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:43.063696  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:43.206173  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:43.468000  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:43.560123  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:43.564203  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:43.706221  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:43.967655  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:44.060179  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:44.063269  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:44.206390  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:44.468741  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:44.560372  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:44.562859  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:44.706366  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:44.847638  703653 node_ready.go:58] node "addons-154736" has status "Ready":"False"
	I1226 21:46:44.967909  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:45.061583  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:45.064665  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:45.211106  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:45.467461  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:45.560089  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:45.562275  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:45.706638  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:45.970355  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:46.059861  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:46.063618  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:46.206118  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:46.467955  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:46.560833  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:46.563482  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:46.706762  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:46.967932  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:47.060026  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:47.062487  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:47.205482  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:47.348084  703653 node_ready.go:58] node "addons-154736" has status "Ready":"False"
	I1226 21:46:47.476263  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:47.560890  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:47.564350  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:47.706703  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:47.969547  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:48.060729  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:48.063500  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:48.206175  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:48.468027  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:48.562588  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:48.565212  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:48.706331  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:48.968883  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:49.062420  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:49.067064  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:49.205752  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:49.468835  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:49.560426  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:49.566168  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:49.706795  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:49.847821  703653 node_ready.go:58] node "addons-154736" has status "Ready":"False"
	I1226 21:46:49.967554  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:50.060438  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:50.063784  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:50.206837  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:50.468711  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:50.568015  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:50.570291  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:50.706000  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:50.968506  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:51.060136  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:51.064069  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:51.205554  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:51.468398  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:51.562146  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:51.562949  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:51.706465  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:51.848005  703653 node_ready.go:58] node "addons-154736" has status "Ready":"False"
	I1226 21:46:51.967993  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:52.060615  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:52.063507  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:52.206665  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:52.489848  703653 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1226 21:46:52.489871  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:52.570538  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:52.575568  703653 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1226 21:46:52.575599  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:52.783948  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:52.858988  703653 node_ready.go:49] node "addons-154736" has status "Ready":"True"
	I1226 21:46:52.859014  703653 node_ready.go:38] duration metric: took 32.015238708s waiting for node "addons-154736" to be "Ready" ...
	I1226 21:46:52.859025  703653 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1226 21:46:52.873732  703653 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-gbz9g" in "kube-system" namespace to be "Ready" ...
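	[The node_ready.go and pod_ready.go lines in this log come from readiness poll loops. As a rough illustration only, not minikube's actual source, a minimal client-go loop producing this kind of output might look like the sketch below; the helper name waitForNodeReady, the 2-second interval, and the fake clientset in main are all assumptions made for the example.]

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/fake"
)

// waitForNodeReady polls until the node's NodeReady condition is True,
// printing the current status on each attempt, much like the
// `node "addons-154736" has status "Ready":"False"` lines in this log.
func waitForNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat API errors as transient and keep polling
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Printf("node %q has status \"Ready\":%q\n", name, c.Status)
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	// Fake clientset seeded with an already-Ready node so the sketch runs standalone.
	cs := fake.NewSimpleClientset(&corev1.Node{
		ObjectMeta: metav1.ObjectMeta{Name: "addons-154736"},
		Status: corev1.NodeStatus{Conditions: []corev1.NodeCondition{
			{Type: corev1.NodeReady, Status: corev1.ConditionTrue},
		}},
	})
	if err := waitForNodeReady(cs, "addons-154736", 10*time.Second); err != nil {
		fmt.Println("node never became Ready:", err)
	}
}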
	I1226 21:46:52.970570  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:53.062030  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:53.066822  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:53.208151  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:53.479541  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:53.561590  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:53.564565  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:53.706740  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:53.970560  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:54.062379  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:54.070050  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:54.206168  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:54.427418  703653 pod_ready.go:92] pod "coredns-5dd5756b68-gbz9g" in "kube-system" namespace has status "Ready":"True"
	I1226 21:46:54.427443  703653 pod_ready.go:81] duration metric: took 1.55363774s waiting for pod "coredns-5dd5756b68-gbz9g" in "kube-system" namespace to be "Ready" ...
	I1226 21:46:54.427484  703653 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-154736" in "kube-system" namespace to be "Ready" ...
	I1226 21:46:54.441610  703653 pod_ready.go:92] pod "etcd-addons-154736" in "kube-system" namespace has status "Ready":"True"
	I1226 21:46:54.441633  703653 pod_ready.go:81] duration metric: took 14.134232ms waiting for pod "etcd-addons-154736" in "kube-system" namespace to be "Ready" ...
	I1226 21:46:54.441673  703653 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-154736" in "kube-system" namespace to be "Ready" ...
	I1226 21:46:54.475840  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:54.482080  703653 pod_ready.go:92] pod "kube-apiserver-addons-154736" in "kube-system" namespace has status "Ready":"True"
	I1226 21:46:54.482105  703653 pod_ready.go:81] duration metric: took 40.41664ms waiting for pod "kube-apiserver-addons-154736" in "kube-system" namespace to be "Ready" ...
	I1226 21:46:54.482143  703653 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-154736" in "kube-system" namespace to be "Ready" ...
	I1226 21:46:54.503272  703653 pod_ready.go:92] pod "kube-controller-manager-addons-154736" in "kube-system" namespace has status "Ready":"True"
	I1226 21:46:54.503299  703653 pod_ready.go:81] duration metric: took 21.139593ms waiting for pod "kube-controller-manager-addons-154736" in "kube-system" namespace to be "Ready" ...
	I1226 21:46:54.503315  703653 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4r79z" in "kube-system" namespace to be "Ready" ...
	I1226 21:46:54.510754  703653 pod_ready.go:92] pod "kube-proxy-4r79z" in "kube-system" namespace has status "Ready":"True"
	I1226 21:46:54.510779  703653 pod_ready.go:81] duration metric: took 7.429869ms waiting for pod "kube-proxy-4r79z" in "kube-system" namespace to be "Ready" ...
	I1226 21:46:54.510791  703653 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-154736" in "kube-system" namespace to be "Ready" ...
	I1226 21:46:54.560768  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:54.569594  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:54.706488  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:54.849037  703653 pod_ready.go:92] pod "kube-scheduler-addons-154736" in "kube-system" namespace has status "Ready":"True"
	I1226 21:46:54.849064  703653 pod_ready.go:81] duration metric: took 338.264308ms waiting for pod "kube-scheduler-addons-154736" in "kube-system" namespace to be "Ready" ...
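	[Each pod_ready.go check above, with its per-pod duration metric, reduces to reading the pod's Ready condition. A self-contained sketch of that check, where the package and function names are assumptions for illustration, not minikube's actual code:]

package readiness

import corev1 "k8s.io/api/core/v1"

// isPodReady reports whether the pod's PodReady condition is True; this is
// the kind of check behind the `has status "Ready":"True"` lines in this log.
func isPodReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}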
	I1226 21:46:54.849077  703653 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-pz8ht" in "kube-system" namespace to be "Ready" ...
	I1226 21:46:54.969242  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:55.061146  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:55.067005  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:55.207290  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:55.470788  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:55.564909  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:55.570773  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:55.706289  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:55.976219  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:56.067117  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:56.070953  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:56.207571  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:56.471026  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:56.561105  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:56.565253  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:56.707338  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:56.856778  703653 pod_ready.go:102] pod "metrics-server-7c66d45ddc-pz8ht" in "kube-system" namespace has status "Ready":"False"
	I1226 21:46:56.969325  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:57.062863  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:57.066909  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:57.206843  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:57.469685  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:57.560698  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:57.565874  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:57.706369  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:57.971579  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:58.067484  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:58.070866  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:58.206809  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:58.469998  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:58.560651  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:58.566168  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:58.705958  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:58.860185  703653 pod_ready.go:102] pod "metrics-server-7c66d45ddc-pz8ht" in "kube-system" namespace has status "Ready":"False"
	I1226 21:46:58.976070  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:59.072436  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:59.074569  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:59.205965  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:59.475382  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:46:59.561133  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:46:59.563634  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:46:59.706415  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:46:59.969341  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:00.072603  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:00.073664  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:00.210007  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:00.471336  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:00.561063  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:00.570171  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:00.708851  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:00.970634  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:01.065099  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:01.069820  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:01.207701  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:01.377053  703653 pod_ready.go:102] pod "metrics-server-7c66d45ddc-pz8ht" in "kube-system" namespace has status "Ready":"False"
	I1226 21:47:01.471027  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:01.561179  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:01.567931  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:01.707290  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:01.969516  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:02.062572  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:02.067470  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:02.206698  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:02.473756  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:02.562936  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:02.568913  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:02.714007  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:02.974098  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:03.060756  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:03.065070  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:03.207705  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:03.469972  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:03.566258  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:03.569802  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:03.707702  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:03.857703  703653 pod_ready.go:102] pod "metrics-server-7c66d45ddc-pz8ht" in "kube-system" namespace has status "Ready":"False"
	I1226 21:47:03.974229  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:04.065539  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:04.068208  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:04.206996  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:04.470685  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:04.560769  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:04.573550  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:04.707771  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:04.970025  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:05.061588  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:05.077281  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:05.206268  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:05.470787  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:05.562031  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:05.565526  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:05.706928  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:05.974070  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:06.060856  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:06.065192  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:06.205889  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:06.357938  703653 pod_ready.go:102] pod "metrics-server-7c66d45ddc-pz8ht" in "kube-system" namespace has status "Ready":"False"
	I1226 21:47:06.474136  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:06.561022  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:06.569230  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:06.706481  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:06.970270  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:07.062297  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:07.065987  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:07.206895  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:07.470017  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:07.560625  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:07.563890  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:07.706085  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:07.969070  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:08.062278  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:08.067967  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:08.206596  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:08.515424  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:08.566521  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:08.574832  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:08.713605  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:08.856198  703653 pod_ready.go:102] pod "metrics-server-7c66d45ddc-pz8ht" in "kube-system" namespace has status "Ready":"False"
	I1226 21:47:08.969419  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:09.061431  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:09.067912  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:09.206807  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:09.469389  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:09.560738  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:09.565395  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:09.706554  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:09.972022  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:10.061966  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:10.065576  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:10.206308  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:10.469790  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:10.561193  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:10.564481  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:10.707575  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:10.857077  703653 pod_ready.go:102] pod "metrics-server-7c66d45ddc-pz8ht" in "kube-system" namespace has status "Ready":"False"
	I1226 21:47:10.968937  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:11.061350  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:11.067239  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:11.206695  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:11.474233  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:11.561359  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:11.564561  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:11.706552  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:11.969464  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:12.061431  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:12.065839  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:12.205791  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:12.486311  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:12.561067  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:12.564620  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:12.708399  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:12.873072  703653 pod_ready.go:102] pod "metrics-server-7c66d45ddc-pz8ht" in "kube-system" namespace has status "Ready":"False"
	I1226 21:47:12.971333  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:13.094658  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:13.096399  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:13.208739  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:13.356187  703653 pod_ready.go:92] pod "metrics-server-7c66d45ddc-pz8ht" in "kube-system" namespace has status "Ready":"True"
	I1226 21:47:13.356216  703653 pod_ready.go:81] duration metric: took 18.507131159s waiting for pod "metrics-server-7c66d45ddc-pz8ht" in "kube-system" namespace to be "Ready" ...
	I1226 21:47:13.356229  703653 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-9xfxt" in "kube-system" namespace to be "Ready" ...
	I1226 21:47:13.469532  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:13.561073  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:13.566388  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:13.706438  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:13.970252  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:14.061999  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:14.070026  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:14.207712  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:14.471132  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:14.567489  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:14.570427  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:14.707552  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:14.970629  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:15.062212  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:15.070838  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:15.207228  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:15.363858  703653 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-9xfxt" in "kube-system" namespace has status "Ready":"False"
	I1226 21:47:15.472669  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:15.563808  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:15.568650  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:15.706794  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:15.969450  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:16.064243  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:16.065309  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:16.207222  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:16.473523  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:16.568060  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:16.569013  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:16.706515  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:16.973407  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:17.066957  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:17.072279  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:17.206427  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:17.364884  703653 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-9xfxt" in "kube-system" namespace has status "Ready":"False"
	I1226 21:47:17.468992  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:17.567084  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:17.570624  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:17.706523  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:17.969537  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:18.060154  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:18.064503  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:18.222664  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:18.469733  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:18.571327  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:18.573773  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:18.709089  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:18.968889  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:19.069311  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:19.073568  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:19.207943  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:19.482855  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:19.575162  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:19.575483  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:19.705784  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:19.863199  703653 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-9xfxt" in "kube-system" namespace has status "Ready":"False"
	I1226 21:47:19.970830  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:20.061874  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:20.065796  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:20.206850  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:20.473637  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:20.561375  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:20.565460  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:20.706454  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:20.968707  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:21.061491  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:21.065619  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:21.206526  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:21.469410  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:21.567855  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:21.568668  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:21.714494  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:21.902192  703653 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-9xfxt" in "kube-system" namespace has status "Ready":"False"
	I1226 21:47:21.968787  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:22.061541  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:22.066095  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:22.206091  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:22.492888  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:22.562627  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:22.589676  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:22.718269  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:22.971014  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:23.060948  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:23.070832  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:23.206897  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:23.469906  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:23.566160  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:23.569918  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:23.707051  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:23.970360  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:24.061380  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:24.065995  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:24.207026  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:24.365402  703653 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-9xfxt" in "kube-system" namespace has status "Ready":"False"
	I1226 21:47:24.469724  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:24.561066  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:24.565718  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:24.708651  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:24.974549  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:25.062214  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:25.065091  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:25.205842  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:25.475004  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:25.560266  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:25.564620  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:25.706207  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:25.969910  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:26.061138  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:26.064142  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:26.205918  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:26.469544  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:26.559990  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:26.564544  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:26.706927  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:26.872638  703653 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-9xfxt" in "kube-system" namespace has status "Ready":"False"
	I1226 21:47:26.989402  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:27.061512  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:27.065575  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:27.206308  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:27.470363  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:27.565960  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:27.571508  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:27.709823  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:27.983523  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:28.064834  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:28.069810  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:28.206858  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:28.469981  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:28.561866  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:28.567347  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:28.706372  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:28.969840  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:29.060913  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:29.065572  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:29.216288  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:29.410195  703653 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-9xfxt" in "kube-system" namespace has status "Ready":"False"
	I1226 21:47:29.469260  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:29.565518  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:29.567101  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:29.706298  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:29.969149  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:30.062598  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:30.071250  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:30.206184  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:30.469173  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:30.560713  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:30.563786  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1226 21:47:30.705967  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:30.970274  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:31.061099  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:31.064599  703653 kapi.go:107] duration metric: took 1m6.505878547s to wait for kubernetes.io/minikube-addons=registry ...
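Each kapi.go:96 line above is one pass of minikube's addon-readiness poll: it lists pods matching the given label and retries while any of them is still Pending, and the kapi.go:107 line marks that label's wait completing. A hedged kubectl equivalent of the registry check (namespace assumed to be kube-system, where minikube's registry addon pods run; the timeout is arbitrary):

    kubectl --context addons-154736 wait --for=condition=Ready pod \
      -l kubernetes.io/minikube-addons=registry -n kube-system --timeout=120s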
	I1226 21:47:31.206762  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:31.469586  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:31.561988  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:31.706767  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:31.864032  703653 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-9xfxt" in "kube-system" namespace has status "Ready":"False"
	I1226 21:47:31.977033  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:32.061252  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:32.206052  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:32.469257  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:32.560678  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:32.706429  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:32.972790  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:33.060407  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:33.208205  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:33.478798  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:33.561123  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:33.713322  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:33.865637  703653 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-9xfxt" in "kube-system" namespace has status "Ready":"False"
	I1226 21:47:33.969626  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:34.068821  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:34.206745  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:34.468862  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:34.561874  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:34.706720  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:34.970112  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:35.061384  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:35.206359  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:35.470876  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:35.560734  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:35.707315  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:35.969191  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:36.061593  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:36.206118  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:36.366788  703653 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-9xfxt" in "kube-system" namespace has status "Ready":"True"
	I1226 21:47:36.366814  703653 pod_ready.go:81] duration metric: took 23.010576239s waiting for pod "nvidia-device-plugin-daemonset-9xfxt" in "kube-system" namespace to be "Ready" ...
	I1226 21:47:36.366838  703653 pod_ready.go:38] duration metric: took 43.50780045s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
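The pod_ready checks read the pod's status.conditions and wait for the condition of type Ready to report True. A hedged way to inspect the same condition directly (pod name taken from the log above):

    kubectl -n kube-system get pod nvidia-device-plugin-daemonset-9xfxt \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'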
	I1226 21:47:36.366858  703653 api_server.go:52] waiting for apiserver process to appear ...
	I1226 21:47:36.366895  703653 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1226 21:47:36.366967  703653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1226 21:47:36.437260  703653 cri.go:89] found id: "c5b1b0ac08cda61e9bbefc778ff2c9efec379f6aaf465452284a7a96531982d8"
	I1226 21:47:36.437371  703653 cri.go:89] found id: ""
	I1226 21:47:36.437394  703653 logs.go:284] 1 containers: [c5b1b0ac08cda61e9bbefc778ff2c9efec379f6aaf465452284a7a96531982d8]
	I1226 21:47:36.437504  703653 ssh_runner.go:195] Run: which crictl
	I1226 21:47:36.444644  703653 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1226 21:47:36.444799  703653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1226 21:47:36.470712  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:36.568139  703653 cri.go:89] found id: "a00bf48419309758b7753801eeea631f219e76d965d7b323f5e9abc71c187c1a"
	I1226 21:47:36.568223  703653 cri.go:89] found id: ""
	I1226 21:47:36.568258  703653 logs.go:284] 1 containers: [a00bf48419309758b7753801eeea631f219e76d965d7b323f5e9abc71c187c1a]
	I1226 21:47:36.568371  703653 ssh_runner.go:195] Run: which crictl
	I1226 21:47:36.574008  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:36.589803  703653 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1226 21:47:36.589967  703653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1226 21:47:36.709373  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:36.777925  703653 cri.go:89] found id: "0b9784687fdf8249ef1817273d553c0e4539fdc99e2ed4e51e1e88341426888a"
	I1226 21:47:36.777994  703653 cri.go:89] found id: ""
	I1226 21:47:36.778023  703653 logs.go:284] 1 containers: [0b9784687fdf8249ef1817273d553c0e4539fdc99e2ed4e51e1e88341426888a]
	I1226 21:47:36.778115  703653 ssh_runner.go:195] Run: which crictl
	I1226 21:47:36.786332  703653 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1226 21:47:36.786481  703653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1226 21:47:36.865357  703653 cri.go:89] found id: "a1a3df534703e2bb88745f494a4f27e49f98cc68c5747f8b75f69df096105d43"
	I1226 21:47:36.865381  703653 cri.go:89] found id: ""
	I1226 21:47:36.865389  703653 logs.go:284] 1 containers: [a1a3df534703e2bb88745f494a4f27e49f98cc68c5747f8b75f69df096105d43]
	I1226 21:47:36.865446  703653 ssh_runner.go:195] Run: which crictl
	I1226 21:47:36.871193  703653 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1226 21:47:36.871273  703653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1226 21:47:36.939797  703653 cri.go:89] found id: "fc7c1d4cc434f0b477649382600f28a8b97ca79c8a08eb97ab1947e7523f584d"
	I1226 21:47:36.939824  703653 cri.go:89] found id: ""
	I1226 21:47:36.939833  703653 logs.go:284] 1 containers: [fc7c1d4cc434f0b477649382600f28a8b97ca79c8a08eb97ab1947e7523f584d]
	I1226 21:47:36.939889  703653 ssh_runner.go:195] Run: which crictl
	I1226 21:47:36.950417  703653 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1226 21:47:36.950495  703653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1226 21:47:36.971592  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:37.019700  703653 cri.go:89] found id: "5f4d17cd1a7591fad519abf838f38f74aa1d9ab2dbda0b46ce722b54400e80ee"
	I1226 21:47:37.019732  703653 cri.go:89] found id: ""
	I1226 21:47:37.019741  703653 logs.go:284] 1 containers: [5f4d17cd1a7591fad519abf838f38f74aa1d9ab2dbda0b46ce722b54400e80ee]
	I1226 21:47:37.019810  703653 ssh_runner.go:195] Run: which crictl
	I1226 21:47:37.044883  703653 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1226 21:47:37.044974  703653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1226 21:47:37.061353  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:37.108196  703653 cri.go:89] found id: "5f14597475deec6687f7cf3ad4f366e454ec133bf87236d0dd00a321bff92445"
	I1226 21:47:37.108272  703653 cri.go:89] found id: ""
	I1226 21:47:37.108293  703653 logs.go:284] 1 containers: [5f14597475deec6687f7cf3ad4f366e454ec133bf87236d0dd00a321bff92445]
	I1226 21:47:37.108380  703653 ssh_runner.go:195] Run: which crictl
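The block above is minikube's container-discovery step: for each control-plane component it asks the CRI runtime for matching container IDs, which the log-gathering step below then reads. Reproduced by hand with the same commands the log shows (the container ID is a placeholder for whatever the first command prints):

    sudo crictl ps -a --quiet --name=kube-apiserver        # prints the matching container ID
    sudo /usr/bin/crictl logs --tail 400 <container-id>    # fetch that container's recent logs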
	I1226 21:47:37.114078  703653 logs.go:123] Gathering logs for kube-apiserver [c5b1b0ac08cda61e9bbefc778ff2c9efec379f6aaf465452284a7a96531982d8] ...
	I1226 21:47:37.114151  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c5b1b0ac08cda61e9bbefc778ff2c9efec379f6aaf465452284a7a96531982d8"
	I1226 21:47:37.195220  703653 logs.go:123] Gathering logs for etcd [a00bf48419309758b7753801eeea631f219e76d965d7b323f5e9abc71c187c1a] ...
	I1226 21:47:37.195296  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a00bf48419309758b7753801eeea631f219e76d965d7b323f5e9abc71c187c1a"
	I1226 21:47:37.218174  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:37.275411  703653 logs.go:123] Gathering logs for kube-scheduler [a1a3df534703e2bb88745f494a4f27e49f98cc68c5747f8b75f69df096105d43] ...
	I1226 21:47:37.275488  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a1a3df534703e2bb88745f494a4f27e49f98cc68c5747f8b75f69df096105d43"
	I1226 21:47:37.362244  703653 logs.go:123] Gathering logs for kube-proxy [fc7c1d4cc434f0b477649382600f28a8b97ca79c8a08eb97ab1947e7523f584d] ...
	I1226 21:47:37.363123  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc7c1d4cc434f0b477649382600f28a8b97ca79c8a08eb97ab1947e7523f584d"
	I1226 21:47:37.424432  703653 logs.go:123] Gathering logs for container status ...
	I1226 21:47:37.424460  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1226 21:47:37.470454  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:37.495685  703653 logs.go:123] Gathering logs for kubelet ...
	I1226 21:47:37.495714  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1226 21:47:37.555728  703653 logs.go:138] Found kubelet problem: Dec 26 21:46:18 addons-154736 kubelet[1365]: W1226 21:46:18.444235    1365 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-154736" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-154736' and this object
	W1226 21:47:37.555951  703653 logs.go:138] Found kubelet problem: Dec 26 21:46:18 addons-154736 kubelet[1365]: E1226 21:46:18.444308    1365 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-154736" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-154736' and this object
	I1226 21:47:37.563330  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1226 21:47:37.577596  703653 logs.go:138] Found kubelet problem: Dec 26 21:46:52 addons-154736 kubelet[1365]: W1226 21:46:52.432342    1365 reflector.go:535] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-154736" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-154736' and this object
	W1226 21:47:37.577823  703653 logs.go:138] Found kubelet problem: Dec 26 21:46:52 addons-154736 kubelet[1365]: E1226 21:46:52.432378    1365 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-154736" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-154736' and this object
	W1226 21:47:37.578130  703653 logs.go:138] Found kubelet problem: Dec 26 21:46:52 addons-154736 kubelet[1365]: W1226 21:46:52.434011    1365 reflector.go:535] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-154736" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-154736' and this object
	W1226 21:47:37.578343  703653 logs.go:138] Found kubelet problem: Dec 26 21:46:52 addons-154736 kubelet[1365]: E1226 21:46:52.434040    1365 reflector.go:147] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-154736" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-154736' and this object
	W1226 21:47:37.579459  703653 logs.go:138] Found kubelet problem: Dec 26 21:46:52 addons-154736 kubelet[1365]: W1226 21:46:52.447831    1365 reflector.go:535] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-154736" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-154736' and this object
	W1226 21:47:37.579665  703653 logs.go:138] Found kubelet problem: Dec 26 21:46:52 addons-154736 kubelet[1365]: E1226 21:46:52.447865    1365 reflector.go:147] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-154736" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-154736' and this object
	W1226 21:47:37.581033  703653 logs.go:138] Found kubelet problem: Dec 26 21:46:52 addons-154736 kubelet[1365]: W1226 21:46:52.510344    1365 reflector.go:535] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-154736" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-154736' and this object
	W1226 21:47:37.581221  703653 logs.go:138] Found kubelet problem: Dec 26 21:46:52 addons-154736 kubelet[1365]: E1226 21:46:52.510378    1365 reflector.go:147] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-154736" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-154736' and this object
	I1226 21:47:37.618607  703653 logs.go:123] Gathering logs for describe nodes ...
	I1226 21:47:37.618646  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1226 21:47:37.707887  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:37.883827  703653 logs.go:123] Gathering logs for kube-controller-manager [5f4d17cd1a7591fad519abf838f38f74aa1d9ab2dbda0b46ce722b54400e80ee] ...
	I1226 21:47:37.883860  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f4d17cd1a7591fad519abf838f38f74aa1d9ab2dbda0b46ce722b54400e80ee"
	I1226 21:47:37.970815  703653 logs.go:123] Gathering logs for kindnet [5f14597475deec6687f7cf3ad4f366e454ec133bf87236d0dd00a321bff92445] ...
	I1226 21:47:37.970891  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f14597475deec6687f7cf3ad4f366e454ec133bf87236d0dd00a321bff92445"
	I1226 21:47:37.972427  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:38.083080  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:38.112927  703653 logs.go:123] Gathering logs for CRI-O ...
	I1226 21:47:38.112956  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1226 21:47:38.210766  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:38.227495  703653 logs.go:123] Gathering logs for dmesg ...
	I1226 21:47:38.227530  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1226 21:47:38.260459  703653 logs.go:123] Gathering logs for coredns [0b9784687fdf8249ef1817273d553c0e4539fdc99e2ed4e51e1e88341426888a] ...
	I1226 21:47:38.260489  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b9784687fdf8249ef1817273d553c0e4539fdc99e2ed4e51e1e88341426888a"
	I1226 21:47:38.314608  703653 out.go:309] Setting ErrFile to fd 2...
	I1226 21:47:38.314638  703653 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1226 21:47:38.314684  703653 out.go:239] X Problems detected in kubelet:
	W1226 21:47:38.314697  703653 out.go:239]   Dec 26 21:46:52 addons-154736 kubelet[1365]: E1226 21:46:52.434040    1365 reflector.go:147] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-154736" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-154736' and this object
	W1226 21:47:38.314705  703653 out.go:239]   Dec 26 21:46:52 addons-154736 kubelet[1365]: W1226 21:46:52.447831    1365 reflector.go:535] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-154736" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-154736' and this object
	W1226 21:47:38.314715  703653 out.go:239]   Dec 26 21:46:52 addons-154736 kubelet[1365]: E1226 21:46:52.447865    1365 reflector.go:147] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-154736" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-154736' and this object
	W1226 21:47:38.314726  703653 out.go:239]   Dec 26 21:46:52 addons-154736 kubelet[1365]: W1226 21:46:52.510344    1365 reflector.go:535] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-154736" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-154736' and this object
	W1226 21:47:38.314733  703653 out.go:239]   Dec 26 21:46:52 addons-154736 kubelet[1365]: E1226 21:46:52.510378    1365 reflector.go:147] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-154736" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-154736' and this object
	I1226 21:47:38.314742  703653 out.go:309] Setting ErrFile to fd 2...
	I1226 21:47:38.314748  703653 out.go:343] TERM=,COLORTERM=, which probably does not support color
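About the reflector warnings collected above: the message "no relationship found between node 'addons-154736' and this object" is characteristic of the kube-apiserver's node authorizer, which only lets a kubelet read Secrets and ConfigMaps referenced by pods already bound to its node. While the gcp-auth and ingress-nginx pods were still being scheduled at 21:46:52, those list calls were rejected, so this is usually a transient scheduling-order effect rather than a standing RBAC misconfiguration. A hedged re-scan of the same journal window logs.go uses:

    sudo journalctl -u kubelet -n 400 | grep 'reflector.go'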
	I1226 21:47:38.469819  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:38.562425  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:38.705829  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:38.969304  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:39.072021  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:39.206688  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:39.470181  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:39.570020  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:39.707394  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:39.984670  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:40.065219  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:40.206859  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:40.469223  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:40.561358  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:40.706599  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:40.969985  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:41.061660  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:41.206767  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:41.470553  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:41.561113  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:41.705515  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:41.969382  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:42.061119  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:42.206342  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:42.470436  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:42.563875  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:42.706690  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:42.969264  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:43.065780  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:43.210316  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:43.470323  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:43.561501  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:43.706578  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:43.969896  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:44.062275  703653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 21:47:44.206718  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:44.472904  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:44.560083  703653 kapi.go:107] duration metric: took 1m20.004440781s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1226 21:47:44.705661  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:44.969639  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:45.209224  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:45.471716  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:45.707461  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:45.970535  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:46.206918  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1226 21:47:46.468572  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:46.710653  703653 kapi.go:107] duration metric: took 1m18.008457405s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1226 21:47:46.713128  703653 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-154736 cluster.
	I1226 21:47:46.715469  703653 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1226 21:47:46.717459  703653 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
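A hedged illustration of the opt-out described above: the gcp-auth webhook injects credentials at pod admission, so the skip label has to be present on the pod when it is created (label key from the message above; pod name and image are arbitrary):

    kubectl run no-creds --image=busybox --restart=Never \
      --labels=gcp-auth-skip-secret=true -- sleep 3600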
	I1226 21:47:46.968908  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:47.469921  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:47.970474  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:48.315876  703653 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1226 21:47:48.333253  703653 api_server.go:72] duration metric: took 1m29.613843192s to wait for apiserver process to appear ...
	I1226 21:47:48.333327  703653 api_server.go:88] waiting for apiserver healthz status ...
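The process check above can be reproduced verbatim: pgrep -f matches against the full command line, -x requires the pattern to match it exactly, and -n selects the newest matching process (pattern quoted here to keep the shell from expanding it):

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'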
	I1226 21:47:48.333374  703653 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1226 21:47:48.333520  703653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1226 21:47:48.387443  703653 cri.go:89] found id: "c5b1b0ac08cda61e9bbefc778ff2c9efec379f6aaf465452284a7a96531982d8"
	I1226 21:47:48.387505  703653 cri.go:89] found id: ""
	I1226 21:47:48.387528  703653 logs.go:284] 1 containers: [c5b1b0ac08cda61e9bbefc778ff2c9efec379f6aaf465452284a7a96531982d8]
	I1226 21:47:48.387614  703653 ssh_runner.go:195] Run: which crictl
	I1226 21:47:48.392971  703653 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1226 21:47:48.393076  703653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1226 21:47:48.441764  703653 cri.go:89] found id: "a00bf48419309758b7753801eeea631f219e76d965d7b323f5e9abc71c187c1a"
	I1226 21:47:48.441787  703653 cri.go:89] found id: ""
	I1226 21:47:48.441795  703653 logs.go:284] 1 containers: [a00bf48419309758b7753801eeea631f219e76d965d7b323f5e9abc71c187c1a]
	I1226 21:47:48.441857  703653 ssh_runner.go:195] Run: which crictl
	I1226 21:47:48.446560  703653 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1226 21:47:48.446637  703653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1226 21:47:48.469918  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:48.497664  703653 cri.go:89] found id: "0b9784687fdf8249ef1817273d553c0e4539fdc99e2ed4e51e1e88341426888a"
	I1226 21:47:48.497689  703653 cri.go:89] found id: ""
	I1226 21:47:48.497698  703653 logs.go:284] 1 containers: [0b9784687fdf8249ef1817273d553c0e4539fdc99e2ed4e51e1e88341426888a]
	I1226 21:47:48.497770  703653 ssh_runner.go:195] Run: which crictl
	I1226 21:47:48.503260  703653 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1226 21:47:48.503397  703653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1226 21:47:48.549508  703653 cri.go:89] found id: "a1a3df534703e2bb88745f494a4f27e49f98cc68c5747f8b75f69df096105d43"
	I1226 21:47:48.549586  703653 cri.go:89] found id: ""
	I1226 21:47:48.549618  703653 logs.go:284] 1 containers: [a1a3df534703e2bb88745f494a4f27e49f98cc68c5747f8b75f69df096105d43]
	I1226 21:47:48.549705  703653 ssh_runner.go:195] Run: which crictl
	I1226 21:47:48.561880  703653 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1226 21:47:48.562111  703653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1226 21:47:48.610492  703653 cri.go:89] found id: "fc7c1d4cc434f0b477649382600f28a8b97ca79c8a08eb97ab1947e7523f584d"
	I1226 21:47:48.610522  703653 cri.go:89] found id: ""
	I1226 21:47:48.610531  703653 logs.go:284] 1 containers: [fc7c1d4cc434f0b477649382600f28a8b97ca79c8a08eb97ab1947e7523f584d]
	I1226 21:47:48.610598  703653 ssh_runner.go:195] Run: which crictl
	I1226 21:47:48.615237  703653 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1226 21:47:48.615369  703653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1226 21:47:48.659894  703653 cri.go:89] found id: "5f4d17cd1a7591fad519abf838f38f74aa1d9ab2dbda0b46ce722b54400e80ee"
	I1226 21:47:48.659921  703653 cri.go:89] found id: ""
	I1226 21:47:48.659929  703653 logs.go:284] 1 containers: [5f4d17cd1a7591fad519abf838f38f74aa1d9ab2dbda0b46ce722b54400e80ee]
	I1226 21:47:48.659986  703653 ssh_runner.go:195] Run: which crictl
	I1226 21:47:48.664547  703653 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1226 21:47:48.664625  703653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1226 21:47:48.716253  703653 cri.go:89] found id: "5f14597475deec6687f7cf3ad4f366e454ec133bf87236d0dd00a321bff92445"
	I1226 21:47:48.716328  703653 cri.go:89] found id: ""
	I1226 21:47:48.716364  703653 logs.go:284] 1 containers: [5f14597475deec6687f7cf3ad4f366e454ec133bf87236d0dd00a321bff92445]
	I1226 21:47:48.716458  703653 ssh_runner.go:195] Run: which crictl
	I1226 21:47:48.721160  703653 logs.go:123] Gathering logs for kube-scheduler [a1a3df534703e2bb88745f494a4f27e49f98cc68c5747f8b75f69df096105d43] ...
	I1226 21:47:48.721186  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a1a3df534703e2bb88745f494a4f27e49f98cc68c5747f8b75f69df096105d43"
	I1226 21:47:48.796955  703653 logs.go:123] Gathering logs for kube-controller-manager [5f4d17cd1a7591fad519abf838f38f74aa1d9ab2dbda0b46ce722b54400e80ee] ...
	I1226 21:47:48.796993  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f4d17cd1a7591fad519abf838f38f74aa1d9ab2dbda0b46ce722b54400e80ee"
	I1226 21:47:48.939771  703653 logs.go:123] Gathering logs for kindnet [5f14597475deec6687f7cf3ad4f366e454ec133bf87236d0dd00a321bff92445] ...
	I1226 21:47:48.939808  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f14597475deec6687f7cf3ad4f366e454ec133bf87236d0dd00a321bff92445"
	I1226 21:47:48.973717  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:49.030173  703653 logs.go:123] Gathering logs for container status ...
	I1226 21:47:49.030203  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1226 21:47:49.125812  703653 logs.go:123] Gathering logs for dmesg ...
	I1226 21:47:49.125848  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1226 21:47:49.171784  703653 logs.go:123] Gathering logs for kube-apiserver [c5b1b0ac08cda61e9bbefc778ff2c9efec379f6aaf465452284a7a96531982d8] ...
	I1226 21:47:49.171827  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c5b1b0ac08cda61e9bbefc778ff2c9efec379f6aaf465452284a7a96531982d8"
	I1226 21:47:49.261006  703653 logs.go:123] Gathering logs for etcd [a00bf48419309758b7753801eeea631f219e76d965d7b323f5e9abc71c187c1a] ...
	I1226 21:47:49.261045  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a00bf48419309758b7753801eeea631f219e76d965d7b323f5e9abc71c187c1a"
	I1226 21:47:49.385558  703653 logs.go:123] Gathering logs for coredns [0b9784687fdf8249ef1817273d553c0e4539fdc99e2ed4e51e1e88341426888a] ...
	I1226 21:47:49.385596  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b9784687fdf8249ef1817273d553c0e4539fdc99e2ed4e51e1e88341426888a"
	I1226 21:47:49.470012  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:49.494852  703653 logs.go:123] Gathering logs for kubelet ...
	I1226 21:47:49.494954  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1226 21:47:49.548364  703653 logs.go:138] Found kubelet problem: Dec 26 21:46:52 addons-154736 kubelet[1365]: W1226 21:46:52.432342    1365 reflector.go:535] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-154736" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-154736' and this object
	W1226 21:47:49.549166  703653 logs.go:138] Found kubelet problem: Dec 26 21:46:52 addons-154736 kubelet[1365]: E1226 21:46:52.432378    1365 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-154736" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-154736' and this object
	W1226 21:47:49.549563  703653 logs.go:138] Found kubelet problem: Dec 26 21:46:52 addons-154736 kubelet[1365]: W1226 21:46:52.434011    1365 reflector.go:535] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-154736" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-154736' and this object
	W1226 21:47:49.549818  703653 logs.go:138] Found kubelet problem: Dec 26 21:46:52 addons-154736 kubelet[1365]: E1226 21:46:52.434040    1365 reflector.go:147] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-154736" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-154736' and this object
	W1226 21:47:49.551333  703653 logs.go:138] Found kubelet problem: Dec 26 21:46:52 addons-154736 kubelet[1365]: W1226 21:46:52.447831    1365 reflector.go:535] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-154736" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-154736' and this object
	W1226 21:47:49.551609  703653 logs.go:138] Found kubelet problem: Dec 26 21:46:52 addons-154736 kubelet[1365]: E1226 21:46:52.447865    1365 reflector.go:147] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-154736" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-154736' and this object
	W1226 21:47:49.553338  703653 logs.go:138] Found kubelet problem: Dec 26 21:46:52 addons-154736 kubelet[1365]: W1226 21:46:52.510344    1365 reflector.go:535] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-154736" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-154736' and this object
	W1226 21:47:49.553576  703653 logs.go:138] Found kubelet problem: Dec 26 21:46:52 addons-154736 kubelet[1365]: E1226 21:46:52.510378    1365 reflector.go:147] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-154736" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-154736' and this object
	I1226 21:47:49.605108  703653 logs.go:123] Gathering logs for describe nodes ...
	I1226 21:47:49.605192  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1226 21:47:49.830170  703653 logs.go:123] Gathering logs for kube-proxy [fc7c1d4cc434f0b477649382600f28a8b97ca79c8a08eb97ab1947e7523f584d] ...
	I1226 21:47:49.830254  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc7c1d4cc434f0b477649382600f28a8b97ca79c8a08eb97ab1947e7523f584d"
	I1226 21:47:49.887319  703653 logs.go:123] Gathering logs for CRI-O ...
	I1226 21:47:49.887347  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1226 21:47:49.972589  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:50.005922  703653 out.go:309] Setting ErrFile to fd 2...
	I1226 21:47:50.005963  703653 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1226 21:47:50.006045  703653 out.go:239] X Problems detected in kubelet:
	W1226 21:47:50.006055  703653 out.go:239]   Dec 26 21:46:52 addons-154736 kubelet[1365]: E1226 21:46:52.434040    1365 reflector.go:147] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-154736" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-154736' and this object
	W1226 21:47:50.006062  703653 out.go:239]   Dec 26 21:46:52 addons-154736 kubelet[1365]: W1226 21:46:52.447831    1365 reflector.go:535] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-154736" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-154736' and this object
	W1226 21:47:50.006073  703653 out.go:239]   Dec 26 21:46:52 addons-154736 kubelet[1365]: E1226 21:46:52.447865    1365 reflector.go:147] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-154736" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-154736' and this object
	W1226 21:47:50.006081  703653 out.go:239]   Dec 26 21:46:52 addons-154736 kubelet[1365]: W1226 21:46:52.510344    1365 reflector.go:535] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-154736" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-154736' and this object
	W1226 21:47:50.006087  703653 out.go:239]   Dec 26 21:46:52 addons-154736 kubelet[1365]: E1226 21:46:52.510378    1365 reflector.go:147] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-154736" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-154736' and this object
	I1226 21:47:50.006236  703653 out.go:309] Setting ErrFile to fd 2...
	I1226 21:47:50.006245  703653 out.go:343] TERM=,COLORTERM=, which probably does not support color
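This "Problems detected in kubelet" block is identical to the one printed at 21:47:38 because each health phase re-runs the full log gather and the fixed journal window (-n 400) still contains the 21:46:52 entries; the repetition reflects re-scanning the same journal, not new failures.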
	I1226 21:47:50.469306  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:50.969161  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:51.469541  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:51.976072  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:52.468916  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:52.971505  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:53.471156  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:53.972400  703653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1226 21:47:54.470053  703653 kapi.go:107] duration metric: took 1m29.506960429s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1226 21:47:54.472743  703653 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner, cloud-spanner, default-storageclass, metrics-server, inspektor-gadget, ingress-dns, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1226 21:47:54.474676  703653 addons.go:508] enable addons completed in 1m36.704070199s: enabled=[nvidia-device-plugin storage-provisioner cloud-spanner default-storageclass metrics-server inspektor-gadget ingress-dns yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
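A hedged way to confirm the final addon state summarized above (profile name taken from the log):

    minikube addons list -p addons-154736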
	I1226 21:48:00.012117  703653 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1226 21:48:00.054028  703653 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1226 21:48:00.105602  703653 api_server.go:141] control plane version: v1.28.4
	I1226 21:48:00.105631  703653 api_server.go:131] duration metric: took 11.77228382s to wait for apiserver health ...
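The healthz probe is a plain HTTPS GET against the apiserver. Default RBAC grants unauthenticated callers access to /healthz through the system:public-info-viewer role, so on a stock minikube cluster the 200/ok above can usually be reproduced with curl (-k skips verification of minikube's self-signed certificate; clusters with anonymous auth disabled will need client credentials):

    curl -k https://192.168.49.2:8443/healthz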
	I1226 21:48:00.105641  703653 system_pods.go:43] waiting for kube-system pods to appear ...
	I1226 21:48:00.105664  703653 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1226 21:48:00.105734  703653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1226 21:48:00.205318  703653 cri.go:89] found id: "c5b1b0ac08cda61e9bbefc778ff2c9efec379f6aaf465452284a7a96531982d8"
	I1226 21:48:00.205391  703653 cri.go:89] found id: ""
	I1226 21:48:00.205415  703653 logs.go:284] 1 containers: [c5b1b0ac08cda61e9bbefc778ff2c9efec379f6aaf465452284a7a96531982d8]
	I1226 21:48:00.205524  703653 ssh_runner.go:195] Run: which crictl
	I1226 21:48:00.215039  703653 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1226 21:48:00.215144  703653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1226 21:48:00.324722  703653 cri.go:89] found id: "a00bf48419309758b7753801eeea631f219e76d965d7b323f5e9abc71c187c1a"
	I1226 21:48:00.324761  703653 cri.go:89] found id: ""
	I1226 21:48:00.324771  703653 logs.go:284] 1 containers: [a00bf48419309758b7753801eeea631f219e76d965d7b323f5e9abc71c187c1a]
	I1226 21:48:00.324844  703653 ssh_runner.go:195] Run: which crictl
	I1226 21:48:00.334077  703653 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1226 21:48:00.334167  703653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1226 21:48:00.430622  703653 cri.go:89] found id: "0b9784687fdf8249ef1817273d553c0e4539fdc99e2ed4e51e1e88341426888a"
	I1226 21:48:00.430677  703653 cri.go:89] found id: ""
	I1226 21:48:00.430686  703653 logs.go:284] 1 containers: [0b9784687fdf8249ef1817273d553c0e4539fdc99e2ed4e51e1e88341426888a]
	I1226 21:48:00.430760  703653 ssh_runner.go:195] Run: which crictl
	I1226 21:48:00.436429  703653 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1226 21:48:00.436554  703653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1226 21:48:00.495777  703653 cri.go:89] found id: "a1a3df534703e2bb88745f494a4f27e49f98cc68c5747f8b75f69df096105d43"
	I1226 21:48:00.495803  703653 cri.go:89] found id: ""
	I1226 21:48:00.495812  703653 logs.go:284] 1 containers: [a1a3df534703e2bb88745f494a4f27e49f98cc68c5747f8b75f69df096105d43]
	I1226 21:48:00.495875  703653 ssh_runner.go:195] Run: which crictl
	I1226 21:48:00.501338  703653 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1226 21:48:00.501419  703653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1226 21:48:00.552860  703653 cri.go:89] found id: "fc7c1d4cc434f0b477649382600f28a8b97ca79c8a08eb97ab1947e7523f584d"
	I1226 21:48:00.552885  703653 cri.go:89] found id: ""
	I1226 21:48:00.552895  703653 logs.go:284] 1 containers: [fc7c1d4cc434f0b477649382600f28a8b97ca79c8a08eb97ab1947e7523f584d]
	I1226 21:48:00.552952  703653 ssh_runner.go:195] Run: which crictl
	I1226 21:48:00.558338  703653 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1226 21:48:00.558413  703653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1226 21:48:00.612356  703653 cri.go:89] found id: "5f4d17cd1a7591fad519abf838f38f74aa1d9ab2dbda0b46ce722b54400e80ee"
	I1226 21:48:00.612377  703653 cri.go:89] found id: ""
	I1226 21:48:00.612385  703653 logs.go:284] 1 containers: [5f4d17cd1a7591fad519abf838f38f74aa1d9ab2dbda0b46ce722b54400e80ee]
	I1226 21:48:00.612449  703653 ssh_runner.go:195] Run: which crictl
	I1226 21:48:00.619281  703653 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1226 21:48:00.619395  703653 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1226 21:48:00.662127  703653 cri.go:89] found id: "5f14597475deec6687f7cf3ad4f366e454ec133bf87236d0dd00a321bff92445"
	I1226 21:48:00.662151  703653 cri.go:89] found id: ""
	I1226 21:48:00.662159  703653 logs.go:284] 1 containers: [5f14597475deec6687f7cf3ad4f366e454ec133bf87236d0dd00a321bff92445]
	I1226 21:48:00.662229  703653 ssh_runner.go:195] Run: which crictl
	I1226 21:48:00.667001  703653 logs.go:123] Gathering logs for kubelet ...
	I1226 21:48:00.667025  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1226 21:48:00.702904  703653 logs.go:138] Found kubelet problem: Dec 26 21:46:52 addons-154736 kubelet[1365]: W1226 21:46:52.432342    1365 reflector.go:535] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-154736" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-154736' and this object
	W1226 21:48:00.703173  703653 logs.go:138] Found kubelet problem: Dec 26 21:46:52 addons-154736 kubelet[1365]: E1226 21:46:52.432378    1365 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-154736" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-154736' and this object
	W1226 21:48:00.703487  703653 logs.go:138] Found kubelet problem: Dec 26 21:46:52 addons-154736 kubelet[1365]: W1226 21:46:52.434011    1365 reflector.go:535] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-154736" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-154736' and this object
	W1226 21:48:00.703673  703653 logs.go:138] Found kubelet problem: Dec 26 21:46:52 addons-154736 kubelet[1365]: E1226 21:46:52.434040    1365 reflector.go:147] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-154736" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-154736' and this object
	W1226 21:48:00.705704  703653 logs.go:138] Found kubelet problem: Dec 26 21:46:52 addons-154736 kubelet[1365]: W1226 21:46:52.447831    1365 reflector.go:535] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-154736" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-154736' and this object
	W1226 21:48:00.705960  703653 logs.go:138] Found kubelet problem: Dec 26 21:46:52 addons-154736 kubelet[1365]: E1226 21:46:52.447865    1365 reflector.go:147] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-154736" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-154736' and this object
	W1226 21:48:00.707327  703653 logs.go:138] Found kubelet problem: Dec 26 21:46:52 addons-154736 kubelet[1365]: W1226 21:46:52.510344    1365 reflector.go:535] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-154736" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-154736' and this object
	W1226 21:48:00.707535  703653 logs.go:138] Found kubelet problem: Dec 26 21:46:52 addons-154736 kubelet[1365]: E1226 21:46:52.510378    1365 reflector.go:147] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-154736" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-154736' and this object
	I1226 21:48:00.754783  703653 logs.go:123] Gathering logs for describe nodes ...
	I1226 21:48:00.754810  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1226 21:48:00.898722  703653 logs.go:123] Gathering logs for coredns [0b9784687fdf8249ef1817273d553c0e4539fdc99e2ed4e51e1e88341426888a] ...
	I1226 21:48:00.898807  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b9784687fdf8249ef1817273d553c0e4539fdc99e2ed4e51e1e88341426888a"
	I1226 21:48:00.944475  703653 logs.go:123] Gathering logs for kube-scheduler [a1a3df534703e2bb88745f494a4f27e49f98cc68c5747f8b75f69df096105d43] ...
	I1226 21:48:00.944507  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a1a3df534703e2bb88745f494a4f27e49f98cc68c5747f8b75f69df096105d43"
	I1226 21:48:00.998551  703653 logs.go:123] Gathering logs for kindnet [5f14597475deec6687f7cf3ad4f366e454ec133bf87236d0dd00a321bff92445] ...
	I1226 21:48:00.998580  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f14597475deec6687f7cf3ad4f366e454ec133bf87236d0dd00a321bff92445"
	I1226 21:48:01.044214  703653 logs.go:123] Gathering logs for CRI-O ...
	I1226 21:48:01.044242  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1226 21:48:01.140135  703653 logs.go:123] Gathering logs for container status ...
	I1226 21:48:01.140173  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1226 21:48:01.217748  703653 logs.go:123] Gathering logs for dmesg ...
	I1226 21:48:01.217781  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1226 21:48:01.240763  703653 logs.go:123] Gathering logs for kube-apiserver [c5b1b0ac08cda61e9bbefc778ff2c9efec379f6aaf465452284a7a96531982d8] ...
	I1226 21:48:01.240795  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c5b1b0ac08cda61e9bbefc778ff2c9efec379f6aaf465452284a7a96531982d8"
	I1226 21:48:01.318123  703653 logs.go:123] Gathering logs for etcd [a00bf48419309758b7753801eeea631f219e76d965d7b323f5e9abc71c187c1a] ...
	I1226 21:48:01.318163  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a00bf48419309758b7753801eeea631f219e76d965d7b323f5e9abc71c187c1a"
	I1226 21:48:01.403342  703653 logs.go:123] Gathering logs for kube-proxy [fc7c1d4cc434f0b477649382600f28a8b97ca79c8a08eb97ab1947e7523f584d] ...
	I1226 21:48:01.403377  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc7c1d4cc434f0b477649382600f28a8b97ca79c8a08eb97ab1947e7523f584d"
	I1226 21:48:01.445673  703653 logs.go:123] Gathering logs for kube-controller-manager [5f4d17cd1a7591fad519abf838f38f74aa1d9ab2dbda0b46ce722b54400e80ee] ...
	I1226 21:48:01.445704  703653 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f4d17cd1a7591fad519abf838f38f74aa1d9ab2dbda0b46ce722b54400e80ee"
	I1226 21:48:01.544040  703653 out.go:309] Setting ErrFile to fd 2...
	I1226 21:48:01.544074  703653 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1226 21:48:01.544133  703653 out.go:239] X Problems detected in kubelet:
	W1226 21:48:01.544145  703653 out.go:239]   Dec 26 21:46:52 addons-154736 kubelet[1365]: E1226 21:46:52.434040    1365 reflector.go:147] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-154736" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-154736' and this object
	W1226 21:48:01.544153  703653 out.go:239]   Dec 26 21:46:52 addons-154736 kubelet[1365]: W1226 21:46:52.447831    1365 reflector.go:535] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-154736" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-154736' and this object
	W1226 21:48:01.544165  703653 out.go:239]   Dec 26 21:46:52 addons-154736 kubelet[1365]: E1226 21:46:52.447865    1365 reflector.go:147] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-154736" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-154736' and this object
	W1226 21:48:01.544171  703653 out.go:239]   Dec 26 21:46:52 addons-154736 kubelet[1365]: W1226 21:46:52.510344    1365 reflector.go:535] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-154736" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-154736' and this object
	W1226 21:48:01.544178  703653 out.go:239]   Dec 26 21:46:52 addons-154736 kubelet[1365]: E1226 21:46:52.510378    1365 reflector.go:147] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-154736" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-154736' and this object
	I1226 21:48:01.544187  703653 out.go:309] Setting ErrFile to fd 2...
	I1226 21:48:01.544193  703653 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 21:48:11.554926  703653 system_pods.go:59] 18 kube-system pods found
	I1226 21:48:11.554969  703653 system_pods.go:61] "coredns-5dd5756b68-gbz9g" [7756995c-c766-475d-9528-a269947fb962] Running
	I1226 21:48:11.554976  703653 system_pods.go:61] "csi-hostpath-attacher-0" [27b8fdc0-b1a5-4537-90ed-94f695dc725c] Running
	I1226 21:48:11.554981  703653 system_pods.go:61] "csi-hostpath-resizer-0" [12956ad9-043f-423a-a709-e31bcd813e2c] Running
	I1226 21:48:11.554987  703653 system_pods.go:61] "csi-hostpathplugin-6v6w7" [16d70f46-43bf-4ddd-84fa-27b4cb888c4d] Running
	I1226 21:48:11.554993  703653 system_pods.go:61] "etcd-addons-154736" [547ecee7-8f0e-4964-9a05-a236594fe216] Running
	I1226 21:48:11.554998  703653 system_pods.go:61] "kindnet-5jgmg" [eca9c6b5-b0b8-4bdc-adf8-082992994bf6] Running
	I1226 21:48:11.555010  703653 system_pods.go:61] "kube-apiserver-addons-154736" [34c16ef5-ca23-4cb1-bec3-39f588dca777] Running
	I1226 21:48:11.555016  703653 system_pods.go:61] "kube-controller-manager-addons-154736" [b82dbbab-8430-449d-bdc0-1958eaf7e227] Running
	I1226 21:48:11.555028  703653 system_pods.go:61] "kube-ingress-dns-minikube" [e6c2ffcd-377b-48c7-b5ca-3ac9d28f4f89] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1226 21:48:11.555034  703653 system_pods.go:61] "kube-proxy-4r79z" [4d99dd25-dcc5-4774-9ed5-ad626aabfced] Running
	I1226 21:48:11.555050  703653 system_pods.go:61] "kube-scheduler-addons-154736" [6a9cd5cd-d4ac-42d2-a4c7-e14e0a947899] Running
	I1226 21:48:11.555058  703653 system_pods.go:61] "metrics-server-7c66d45ddc-pz8ht" [ff2fdb32-af66-480d-ad25-175b65c5b1d4] Running
	I1226 21:48:11.555066  703653 system_pods.go:61] "nvidia-device-plugin-daemonset-9xfxt" [74fad637-1854-48ce-b606-8a09c28e7cfe] Running
	I1226 21:48:11.555071  703653 system_pods.go:61] "registry-g2w98" [21fa161c-0f99-4fb5-9573-259bd78d21a5] Running
	I1226 21:48:11.555078  703653 system_pods.go:61] "registry-proxy-h7qrg" [274f34a4-99a0-4df2-8e40-73229ad88336] Running
	I1226 21:48:11.555083  703653 system_pods.go:61] "snapshot-controller-58dbcc7b99-rtlzb" [b1add7d4-2504-43e0-83c8-40fc2c220da7] Running
	I1226 21:48:11.555088  703653 system_pods.go:61] "snapshot-controller-58dbcc7b99-wl4bb" [a7f38ca6-3848-4c5b-a7a3-b01da5e90140] Running
	I1226 21:48:11.555092  703653 system_pods.go:61] "storage-provisioner" [f0bcfc9d-7cd8-489e-9d2f-49edc5ce7b5d] Running
	I1226 21:48:11.555099  703653 system_pods.go:74] duration metric: took 11.449451529s to wait for pod list to return data ...
	I1226 21:48:11.555111  703653 default_sa.go:34] waiting for default service account to be created ...
	I1226 21:48:11.557677  703653 default_sa.go:45] found service account: "default"
	I1226 21:48:11.557702  703653 default_sa.go:55] duration metric: took 2.583966ms for default service account to be created ...
	I1226 21:48:11.557712  703653 system_pods.go:116] waiting for k8s-apps to be running ...
	I1226 21:48:11.567787  703653 system_pods.go:86] 18 kube-system pods found
	I1226 21:48:11.567826  703653 system_pods.go:89] "coredns-5dd5756b68-gbz9g" [7756995c-c766-475d-9528-a269947fb962] Running
	I1226 21:48:11.567834  703653 system_pods.go:89] "csi-hostpath-attacher-0" [27b8fdc0-b1a5-4537-90ed-94f695dc725c] Running
	I1226 21:48:11.567840  703653 system_pods.go:89] "csi-hostpath-resizer-0" [12956ad9-043f-423a-a709-e31bcd813e2c] Running
	I1226 21:48:11.567846  703653 system_pods.go:89] "csi-hostpathplugin-6v6w7" [16d70f46-43bf-4ddd-84fa-27b4cb888c4d] Running
	I1226 21:48:11.567851  703653 system_pods.go:89] "etcd-addons-154736" [547ecee7-8f0e-4964-9a05-a236594fe216] Running
	I1226 21:48:11.567856  703653 system_pods.go:89] "kindnet-5jgmg" [eca9c6b5-b0b8-4bdc-adf8-082992994bf6] Running
	I1226 21:48:11.567860  703653 system_pods.go:89] "kube-apiserver-addons-154736" [34c16ef5-ca23-4cb1-bec3-39f588dca777] Running
	I1226 21:48:11.567867  703653 system_pods.go:89] "kube-controller-manager-addons-154736" [b82dbbab-8430-449d-bdc0-1958eaf7e227] Running
	I1226 21:48:11.567876  703653 system_pods.go:89] "kube-ingress-dns-minikube" [e6c2ffcd-377b-48c7-b5ca-3ac9d28f4f89] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1226 21:48:11.567890  703653 system_pods.go:89] "kube-proxy-4r79z" [4d99dd25-dcc5-4774-9ed5-ad626aabfced] Running
	I1226 21:48:11.567899  703653 system_pods.go:89] "kube-scheduler-addons-154736" [6a9cd5cd-d4ac-42d2-a4c7-e14e0a947899] Running
	I1226 21:48:11.567904  703653 system_pods.go:89] "metrics-server-7c66d45ddc-pz8ht" [ff2fdb32-af66-480d-ad25-175b65c5b1d4] Running
	I1226 21:48:11.567910  703653 system_pods.go:89] "nvidia-device-plugin-daemonset-9xfxt" [74fad637-1854-48ce-b606-8a09c28e7cfe] Running
	I1226 21:48:11.567917  703653 system_pods.go:89] "registry-g2w98" [21fa161c-0f99-4fb5-9573-259bd78d21a5] Running
	I1226 21:48:11.567922  703653 system_pods.go:89] "registry-proxy-h7qrg" [274f34a4-99a0-4df2-8e40-73229ad88336] Running
	I1226 21:48:11.567926  703653 system_pods.go:89] "snapshot-controller-58dbcc7b99-rtlzb" [b1add7d4-2504-43e0-83c8-40fc2c220da7] Running
	I1226 21:48:11.567931  703653 system_pods.go:89] "snapshot-controller-58dbcc7b99-wl4bb" [a7f38ca6-3848-4c5b-a7a3-b01da5e90140] Running
	I1226 21:48:11.567938  703653 system_pods.go:89] "storage-provisioner" [f0bcfc9d-7cd8-489e-9d2f-49edc5ce7b5d] Running
	I1226 21:48:11.567948  703653 system_pods.go:126] duration metric: took 10.227696ms to wait for k8s-apps to be running ...
	I1226 21:48:11.567960  703653 system_svc.go:44] waiting for kubelet service to be running ....
	I1226 21:48:11.568025  703653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1226 21:48:11.582548  703653 system_svc.go:56] duration metric: took 14.580186ms WaitForService to wait for kubelet.
	I1226 21:48:11.582578  703653 kubeadm.go:581] duration metric: took 1m52.863172985s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1226 21:48:11.582598  703653 node_conditions.go:102] verifying NodePressure condition ...
	I1226 21:48:11.586209  703653 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1226 21:48:11.586240  703653 node_conditions.go:123] node cpu capacity is 2
	I1226 21:48:11.586252  703653 node_conditions.go:105] duration metric: took 3.649229ms to run NodePressure ...
	I1226 21:48:11.586264  703653 start.go:228] waiting for startup goroutines ...
	I1226 21:48:11.586271  703653 start.go:233] waiting for cluster config update ...
	I1226 21:48:11.586284  703653 start.go:242] writing updated cluster config ...
	I1226 21:48:11.586574  703653 ssh_runner.go:195] Run: rm -f paused
	I1226 21:48:11.914611  703653 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I1226 21:48:11.917788  703653 out.go:177] * Done! kubectl is now configured to use "addons-154736" cluster and "default" namespace by default
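	
	The log-gathering steps above run fixed commands over SSH inside the node. A minimal sketch of reproducing the same checks by hand, assuming this run's profile name ("addons-154736"); the coredns container ID is copied from the log above and will differ between runs:
	
	  minikube -p addons-154736 ssh -- sudo /usr/bin/crictl logs --tail 400 0b9784687fdf8249ef1817273d553c0e4539fdc99e2ed4e51e1e88341426888a
	  minikube -p addons-154736 ssh -- sudo journalctl -u crio -n 400
	  minikube -p addons-154736 ssh -- sudo systemctl is-active kubelet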
	
	
	==> CRI-O <==
	Dec 26 21:53:43 addons-154736 crio[897]: time="2023-12-26 21:53:43.487532624Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8aea65d81da202cf886d7766c7f2691bb9e363c6b5d9b1f5d9ddaaa4bc1e90c2,RepoTags:[docker.io/library/nginx:latest],RepoDigests:[docker.io/library/nginx@sha256:2bdc49f2f8ae8d8dc50ed00f2ee56d00385c6f8bc8a8b320d0a294d9e3b49026 docker.io/library/nginx@sha256:73a06b3a2577448f9acc23502a0cb4d41919da9cc5035e66b0a9a09715397684],Size_:196113558,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=16e67bb6-6fbf-4776-bc2d-67c92fe5165b name=/runtime.v1.ImageService/ImageStatus
	Dec 26 21:53:52 addons-154736 crio[897]: time="2023-12-26 21:53:52.758704805Z" level=info msg="Pulling image: docker.io/nginx:latest" id=7dd9cd94-aa7d-437e-abf2-6faa8b56b5a3 name=/runtime.v1.ImageService/PullImage
	Dec 26 21:53:52 addons-154736 crio[897]: time="2023-12-26 21:53:52.760756828Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Dec 26 21:54:06 addons-154736 crio[897]: time="2023-12-26 21:54:06.486984628Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=e704ba1b-c62e-4221-af92-e4df8353b5af name=/runtime.v1.ImageService/ImageStatus
	Dec 26 21:54:06 addons-154736 crio[897]: time="2023-12-26 21:54:06.487201395Z" level=info msg="Image docker.io/nginx:alpine not found" id=e704ba1b-c62e-4221-af92-e4df8353b5af name=/runtime.v1.ImageService/ImageStatus
	Dec 26 21:54:17 addons-154736 crio[897]: time="2023-12-26 21:54:17.488097398Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=f9df838f-cb13-4622-958a-2ab56fd30a07 name=/runtime.v1.ImageService/ImageStatus
	Dec 26 21:54:17 addons-154736 crio[897]: time="2023-12-26 21:54:17.488313352Z" level=info msg="Image docker.io/nginx:alpine not found" id=f9df838f-cb13-4622-958a-2ab56fd30a07 name=/runtime.v1.ImageService/ImageStatus
	Dec 26 21:54:28 addons-154736 crio[897]: time="2023-12-26 21:54:28.487097872Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=6bdb7b09-1849-4bb8-8fe8-e51afa4d72d2 name=/runtime.v1.ImageService/ImageStatus
	Dec 26 21:54:28 addons-154736 crio[897]: time="2023-12-26 21:54:28.488375684Z" level=info msg="Image docker.io/nginx:alpine not found" id=6bdb7b09-1849-4bb8-8fe8-e51afa4d72d2 name=/runtime.v1.ImageService/ImageStatus
	Dec 26 21:54:41 addons-154736 crio[897]: time="2023-12-26 21:54:41.486597003Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=e6d9ed5d-9835-468b-9a97-89908f9f1621 name=/runtime.v1.ImageService/ImageStatus
	Dec 26 21:54:41 addons-154736 crio[897]: time="2023-12-26 21:54:41.486828530Z" level=info msg="Image docker.io/nginx:alpine not found" id=e6d9ed5d-9835-468b-9a97-89908f9f1621 name=/runtime.v1.ImageService/ImageStatus
	Dec 26 21:54:51 addons-154736 crio[897]: time="2023-12-26 21:54:51.487578819Z" level=info msg="Checking image status: docker.io/nginx:latest" id=12f6f416-119b-41c3-ba72-b9464e19093f name=/runtime.v1.ImageService/ImageStatus
	Dec 26 21:54:51 addons-154736 crio[897]: time="2023-12-26 21:54:51.487853471Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8aea65d81da202cf886d7766c7f2691bb9e363c6b5d9b1f5d9ddaaa4bc1e90c2,RepoTags:[docker.io/library/nginx:latest],RepoDigests:[docker.io/library/nginx@sha256:2bdc49f2f8ae8d8dc50ed00f2ee56d00385c6f8bc8a8b320d0a294d9e3b49026 docker.io/library/nginx@sha256:73a06b3a2577448f9acc23502a0cb4d41919da9cc5035e66b0a9a09715397684],Size_:196113558,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=12f6f416-119b-41c3-ba72-b9464e19093f name=/runtime.v1.ImageService/ImageStatus
	Dec 26 21:54:52 addons-154736 crio[897]: time="2023-12-26 21:54:52.486672703Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=01debe37-2c88-41c3-9649-cec2d88a53ca name=/runtime.v1.ImageService/ImageStatus
	Dec 26 21:54:52 addons-154736 crio[897]: time="2023-12-26 21:54:52.486965537Z" level=info msg="Image docker.io/nginx:alpine not found" id=01debe37-2c88-41c3-9649-cec2d88a53ca name=/runtime.v1.ImageService/ImageStatus
	Dec 26 21:55:03 addons-154736 crio[897]: time="2023-12-26 21:55:03.487143241Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=902100ef-ed7b-4b11-b3f9-8c2081f88b1d name=/runtime.v1.ImageService/ImageStatus
	Dec 26 21:55:03 addons-154736 crio[897]: time="2023-12-26 21:55:03.487387634Z" level=info msg="Image docker.io/nginx:alpine not found" id=902100ef-ed7b-4b11-b3f9-8c2081f88b1d name=/runtime.v1.ImageService/ImageStatus
	Dec 26 21:55:03 addons-154736 crio[897]: time="2023-12-26 21:55:03.487481974Z" level=info msg="Checking image status: docker.io/nginx:latest" id=b7dc9222-2082-4b22-b3b3-0af48eb1a400 name=/runtime.v1.ImageService/ImageStatus
	Dec 26 21:55:03 addons-154736 crio[897]: time="2023-12-26 21:55:03.487626873Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8aea65d81da202cf886d7766c7f2691bb9e363c6b5d9b1f5d9ddaaa4bc1e90c2,RepoTags:[docker.io/library/nginx:latest],RepoDigests:[docker.io/library/nginx@sha256:2bdc49f2f8ae8d8dc50ed00f2ee56d00385c6f8bc8a8b320d0a294d9e3b49026 docker.io/library/nginx@sha256:73a06b3a2577448f9acc23502a0cb4d41919da9cc5035e66b0a9a09715397684],Size_:196113558,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=b7dc9222-2082-4b22-b3b3-0af48eb1a400 name=/runtime.v1.ImageService/ImageStatus
	Dec 26 21:55:17 addons-154736 crio[897]: time="2023-12-26 21:55:17.487311218Z" level=info msg="Checking image status: docker.io/nginx:latest" id=82c8cc32-51dd-4087-9ae1-e38497591088 name=/runtime.v1.ImageService/ImageStatus
	Dec 26 21:55:17 addons-154736 crio[897]: time="2023-12-26 21:55:17.487583006Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8aea65d81da202cf886d7766c7f2691bb9e363c6b5d9b1f5d9ddaaa4bc1e90c2,RepoTags:[docker.io/library/nginx:latest],RepoDigests:[docker.io/library/nginx@sha256:2bdc49f2f8ae8d8dc50ed00f2ee56d00385c6f8bc8a8b320d0a294d9e3b49026 docker.io/library/nginx@sha256:73a06b3a2577448f9acc23502a0cb4d41919da9cc5035e66b0a9a09715397684],Size_:196113558,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=82c8cc32-51dd-4087-9ae1-e38497591088 name=/runtime.v1.ImageService/ImageStatus
	Dec 26 21:55:18 addons-154736 crio[897]: time="2023-12-26 21:55:18.486186569Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=f7bdba2b-2092-4536-8063-cd1d2f8579f0 name=/runtime.v1.ImageService/ImageStatus
	Dec 26 21:55:18 addons-154736 crio[897]: time="2023-12-26 21:55:18.486405855Z" level=info msg="Image docker.io/nginx:alpine not found" id=f7bdba2b-2092-4536-8063-cd1d2f8579f0 name=/runtime.v1.ImageService/ImageStatus
	Dec 26 21:55:18 addons-154736 crio[897]: time="2023-12-26 21:55:18.487300521Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=2cc72c56-db56-4a69-b1ab-1dd3fe583efc name=/runtime.v1.ImageService/PullImage
	Dec 26 21:55:18 addons-154736 crio[897]: time="2023-12-26 21:55:18.489340885Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
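	
	The log above shows docker.io/nginx:alpine repeatedly reported as not found (and re-pulled without success) while docker.io/nginx:latest is already cached. A sketch for confirming this from inside the node, assuming this run's profile; crictl ships on the CRI-O based node image:
	
	  minikube -p addons-154736 ssh -- sudo crictl images | grep nginx
	  minikube -p addons-154736 ssh -- sudo crictl pull docker.io/library/nginx:alpine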
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	5fe6705eae1fb       1499ed4fbd0aa6ea742ab6bce25603aa33556e1ac0e2f24a4901a675247e538a                                                                             2 minutes ago       Exited              minikube-ingress-dns                     6                   01202b99ec6d0       kube-ingress-dns-minikube
	abb80e1df162f       ghcr.io/headlamp-k8s/headlamp@sha256:0fe50c48c186b89ff3d341dba427174d8232a64c3062af5de854a3a7cb2105ce                                        7 minutes ago       Running             headlamp                                 0                   7ca1bd6e6c1a2       headlamp-7ddfbb94ff-qntlc
	d649a08406e0b       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          7 minutes ago       Running             csi-snapshotter                          0                   9824742ac2b4d       csi-hostpathplugin-6v6w7
	a2e9c531dfae6       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          7 minutes ago       Running             csi-provisioner                          0                   9824742ac2b4d       csi-hostpathplugin-6v6w7
	c46e1aaf747f5       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            7 minutes ago       Running             liveness-probe                           0                   9824742ac2b4d       csi-hostpathplugin-6v6w7
	5736264be5277       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           7 minutes ago       Running             hostpath                                 0                   9824742ac2b4d       csi-hostpathplugin-6v6w7
	426fb5db606fa       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:63b520448091bc94aa4dba00d6b3b3c25e410c4fb73aa46feae5b25f9895abaa                                 7 minutes ago       Running             gcp-auth                                 0                   9bd4cf657b01b       gcp-auth-d4c87556c-c9kbx
	165279e6203ce       registry.k8s.io/ingress-nginx/controller@sha256:1ca66aa9f7f8fdecbecc88e4b89f0f4e7a1f1e952d0d5e52df2524e526259f6b                             7 minutes ago       Running             controller                               0                   4a0fbbb610f9c       ingress-nginx-controller-69cff4fd79-rqdlh
	c9f32bcae8b00       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                7 minutes ago       Running             node-driver-registrar                    0                   9824742ac2b4d       csi-hostpathplugin-6v6w7
	b684e7784c70e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:67202a0258c6f81d073f265f449a732c89cc1112a8e80ea27317294df6dce2b5                   7 minutes ago       Exited              patch                                    0                   4ea53d7234716       ingress-nginx-admission-patch-gwrdr
	af22d1e4dbcc7       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:67202a0258c6f81d073f265f449a732c89cc1112a8e80ea27317294df6dce2b5                   8 minutes ago       Exited              create                                   0                   d92ec2169085e       ingress-nginx-admission-create-jtzt2
	1bc6c433ebb20       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   8 minutes ago       Running             csi-external-health-monitor-controller   0                   9824742ac2b4d       csi-hostpathplugin-6v6w7
	9ebfe5e4c3c95       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             8 minutes ago       Running             csi-attacher                             0                   6403bae626ef6       csi-hostpath-attacher-0
	de059b46043d6       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              8 minutes ago       Running             csi-resizer                              0                   95e515ab13810       csi-hostpath-resizer-0
	7d0d63bb665e8       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             8 minutes ago       Running             local-path-provisioner                   0                   3c9b34a725c2d       local-path-provisioner-78b46b4d5c-nwr5l
	ce1e295decd3a       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      8 minutes ago       Running             volume-snapshot-controller               0                   01b7c76bc0451       snapshot-controller-58dbcc7b99-wl4bb
	8716dbfa389d1       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      8 minutes ago       Running             volume-snapshot-controller               0                   d160ae65c8530       snapshot-controller-58dbcc7b99-rtlzb
	aea4da7d2eeb6       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                                              8 minutes ago       Running             yakd                                     0                   4f2b49cd1d1cc       yakd-dashboard-9947fc6bf-5ggjq
	2ca195417d20c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             8 minutes ago       Running             storage-provisioner                      0                   c3b7a1e4b36d7       storage-provisioner
	0b9784687fdf8       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                                                             8 minutes ago       Running             coredns                                  0                   eb0ca42f98aa6       coredns-5dd5756b68-gbz9g
	fc7c1d4cc434f       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39                                                                             9 minutes ago       Running             kube-proxy                               0                   ecfe1e6ef509b       kube-proxy-4r79z
	5f14597475dee       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                                                             9 minutes ago       Running             kindnet-cni                              0                   9ab992efa85d9       kindnet-5jgmg
	c5b1b0ac08cda       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419                                                                             9 minutes ago       Running             kube-apiserver                           0                   931df03571461       kube-apiserver-addons-154736
	5f4d17cd1a759       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b                                                                             9 minutes ago       Running             kube-controller-manager                  0                   2627c20b7662e       kube-controller-manager-addons-154736
	a1a3df534703e       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54                                                                             9 minutes ago       Running             kube-scheduler                           0                   ec5babc1f5d4e       kube-scheduler-addons-154736
	a00bf48419309       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                                                             9 minutes ago       Running             etcd                                     0                   7d1113e33d739       etcd-addons-154736
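	
	The status table shows minikube-ingress-dns in Exited state after 6 attempts. A sketch for pulling that container's logs directly (assumes this run's profile; crictl accepts the truncated container ID exactly as printed above):
	
	  minikube -p addons-154736 ssh -- sudo crictl ps -a --name minikube-ingress-dns
	  minikube -p addons-154736 ssh -- sudo crictl logs 5fe6705eae1fb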
	
	
	==> coredns [0b9784687fdf8249ef1817273d553c0e4539fdc99e2ed4e51e1e88341426888a] <==
	[INFO] 10.244.0.17:56311 - 15111 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002393906s
	[INFO] 10.244.0.17:51530 - 56241 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000175454s
	[INFO] 10.244.0.17:51530 - 12478 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000167824s
	[INFO] 10.244.0.17:39191 - 1431 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000152177s
	[INFO] 10.244.0.17:39191 - 26810 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000257677s
	[INFO] 10.244.0.17:48634 - 5977 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000094922s
	[INFO] 10.244.0.17:48634 - 31128 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000201201s
	[INFO] 10.244.0.17:42129 - 3593 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000112933s
	[INFO] 10.244.0.17:42129 - 27143 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000175676s
	[INFO] 10.244.0.17:38550 - 52237 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001772128s
	[INFO] 10.244.0.17:38550 - 50184 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001680774s
	[INFO] 10.244.0.17:43293 - 31812 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000078431s
	[INFO] 10.244.0.17:43293 - 9529 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000118701s
	[INFO] 10.244.0.20:60042 - 54775 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000264445s
	[INFO] 10.244.0.20:36194 - 51775 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00017314s
	[INFO] 10.244.0.20:34311 - 42203 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000161137s
	[INFO] 10.244.0.20:47094 - 4881 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00011441s
	[INFO] 10.244.0.20:40589 - 5895 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000281774s
	[INFO] 10.244.0.20:42712 - 53585 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000328559s
	[INFO] 10.244.0.20:45825 - 51530 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002334691s
	[INFO] 10.244.0.20:60259 - 40603 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002397369s
	[INFO] 10.244.0.20:47110 - 43627 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000740267s
	[INFO] 10.244.0.20:50917 - 58925 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 344 0.004757339s
	[INFO] 10.244.0.22:47955 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000184398s
	[INFO] 10.244.0.22:42660 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000128588s
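	
	The coredns queries above resolve registry.kube-system.svc.cluster.local with NOERROR, so in-cluster DNS itself is healthy. A sketch for re-checking resolution from a throwaway pod; the pod name and image are illustrative, not taken from this run:
	
	  kubectl --context addons-154736 run dns-probe --rm -it --restart=Never \
	    --image=busybox:1.36 -- nslookup registry.kube-system.svc.cluster.local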
	
	
	==> describe nodes <==
	Name:               addons-154736
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-154736
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393f165ced08f66e4386491f243850f87982a22b
	                    minikube.k8s.io/name=addons-154736
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_26T21_46_05_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-154736
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-154736"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 26 Dec 2023 21:46:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-154736
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 26 Dec 2023 21:55:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 26 Dec 2023 21:54:13 +0000   Tue, 26 Dec 2023 21:45:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 26 Dec 2023 21:54:13 +0000   Tue, 26 Dec 2023 21:45:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 26 Dec 2023 21:54:13 +0000   Tue, 26 Dec 2023 21:45:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 26 Dec 2023 21:54:13 +0000   Tue, 26 Dec 2023 21:46:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-154736
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 cfc7b39add434585b250c10345c20f17
	  System UUID:                04713493-cfac-4455-8894-dae1076e6bc4
	  Boot ID:                    f8f887b2-8c20-433d-a967-90e814370f09
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     nginx                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m22s
	  default                     task-pv-pod-restore                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  gcp-auth                    gcp-auth-d4c87556c-c9kbx                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m53s
	  headlamp                    headlamp-7ddfbb94ff-qntlc                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m8s
	  ingress-nginx               ingress-nginx-controller-69cff4fd79-rqdlh     100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         8m57s
	  kube-system                 coredns-5dd5756b68-gbz9g                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     9m1s
	  kube-system                 csi-hostpath-attacher-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m57s
	  kube-system                 csi-hostpath-resizer-0                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m57s
	  kube-system                 csi-hostpathplugin-6v6w7                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m29s
	  kube-system                 etcd-addons-154736                            100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         9m17s
	  kube-system                 kindnet-5jgmg                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      9m3s
	  kube-system                 kube-apiserver-addons-154736                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-controller-manager-addons-154736         200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-ingress-dns-minikube                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m58s
	  kube-system                 kube-proxy-4r79z                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m3s
	  kube-system                 kube-scheduler-addons-154736                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 snapshot-controller-58dbcc7b99-rtlzb          0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m57s
	  kube-system                 snapshot-controller-58dbcc7b99-wl4bb          0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m57s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m58s
	  local-path-storage          local-path-provisioner-78b46b4d5c-nwr5l       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m58s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-5ggjq                0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     8m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             438Mi (5%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 8m57s  kube-proxy       
	  Normal  Starting                 9m17s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m17s  kubelet          Node addons-154736 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m17s  kubelet          Node addons-154736 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m17s  kubelet          Node addons-154736 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m4s   node-controller  Node addons-154736 event: Registered Node addons-154736 in Controller
	  Normal  NodeReady                8m29s  kubelet          Node addons-154736 status is now: NodeReady
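	
	With 950m of the node's 2 CPUs already requested, the zero-request nginx pod still fits, so scheduling pressure is not the failure here. A one-liner to re-check the same allocation table on a live cluster (sketch, assuming this run's context and node name):
	
	  kubectl --context addons-154736 describe node addons-154736 | grep -A 12 'Allocated resources'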
	
	
	==> dmesg <==
	[  +0.001114] FS-Cache: O-key=[8] '635f3b0000000000'
	[  +0.000763] FS-Cache: N-cookie c=00000030 [p=00000027 fl=2 nc=0 na=1]
	[  +0.001031] FS-Cache: N-cookie d=00000000b9607d6a{9p.inode} n=000000000db3c1b7
	[  +0.001157] FS-Cache: N-key=[8] '635f3b0000000000'
	[  +0.002874] FS-Cache: Duplicate cookie detected
	[  +0.000764] FS-Cache: O-cookie c=0000002a [p=00000027 fl=226 nc=0 na=1]
	[  +0.001117] FS-Cache: O-cookie d=00000000b9607d6a{9p.inode} n=000000007ac7c815
	[  +0.001084] FS-Cache: O-key=[8] '635f3b0000000000'
	[  +0.000742] FS-Cache: N-cookie c=00000031 [p=00000027 fl=2 nc=0 na=1]
	[  +0.001038] FS-Cache: N-cookie d=00000000b9607d6a{9p.inode} n=00000000328509c1
	[  +0.001125] FS-Cache: N-key=[8] '635f3b0000000000'
	[  +2.220713] FS-Cache: Duplicate cookie detected
	[  +0.000749] FS-Cache: O-cookie c=00000028 [p=00000027 fl=226 nc=0 na=1]
	[  +0.001122] FS-Cache: O-cookie d=00000000b9607d6a{9p.inode} n=00000000ebeba0e0
	[  +0.001200] FS-Cache: O-key=[8] '615f3b0000000000'
	[  +0.000765] FS-Cache: N-cookie c=00000033 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000989] FS-Cache: N-cookie d=00000000b9607d6a{9p.inode} n=000000008353ea7f
	[  +0.001072] FS-Cache: N-key=[8] '615f3b0000000000'
	[  +0.309997] FS-Cache: Duplicate cookie detected
	[  +0.000749] FS-Cache: O-cookie c=0000002d [p=00000027 fl=226 nc=0 na=1]
	[  +0.001114] FS-Cache: O-cookie d=00000000b9607d6a{9p.inode} n=00000000e02b88cc
	[  +0.001198] FS-Cache: O-key=[8] '695f3b0000000000'
	[  +0.000739] FS-Cache: N-cookie c=00000034 [p=00000027 fl=2 nc=0 na=1]
	[  +0.001020] FS-Cache: N-cookie d=00000000b9607d6a{9p.inode} n=000000000db3c1b7
	[  +0.001131] FS-Cache: N-key=[8] '695f3b0000000000'
	
	
	==> etcd [a00bf48419309758b7753801eeea631f219e76d965d7b323f5e9abc71c187c1a] <==
	{"level":"warn","ts":"2023-12-26T21:46:19.422447Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.411853ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-26T21:46:19.423272Z","caller":"traceutil/trace.go:171","msg":"trace[751642578] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:340; }","duration":"150.325784ms","start":"2023-12-26T21:46:19.27293Z","end":"2023-12-26T21:46:19.423256Z","steps":["trace[751642578] 'agreement among raft nodes before linearized reading'  (duration: 149.376071ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-26T21:46:19.434917Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"153.18153ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-public/\" range_end:\"/registry/serviceaccounts/kube-public0\" ","response":"range_response_count:1 size:179"}
	{"level":"info","ts":"2023-12-26T21:46:19.449152Z","caller":"traceutil/trace.go:171","msg":"trace[612606180] range","detail":"{range_begin:/registry/serviceaccounts/kube-public/; range_end:/registry/serviceaccounts/kube-public0; response_count:1; response_revision:340; }","duration":"167.415706ms","start":"2023-12-26T21:46:19.281716Z","end":"2023-12-26T21:46:19.449131Z","steps":["trace[612606180] 'agreement among raft nodes before linearized reading'  (duration: 153.1419ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-26T21:46:19.435203Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"153.620929ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kube-proxy\" ","response":"range_response_count:1 size:185"}
	{"level":"info","ts":"2023-12-26T21:46:19.450364Z","caller":"traceutil/trace.go:171","msg":"trace[1514450604] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kube-proxy; range_end:; response_count:1; response_revision:340; }","duration":"168.7533ms","start":"2023-12-26T21:46:19.281574Z","end":"2023-12-26T21:46:19.450327Z","steps":["trace[1514450604] 'agreement among raft nodes before linearized reading'  (duration: 153.599177ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-26T21:46:19.435236Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"153.87264ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2023-12-26T21:46:19.450684Z","caller":"traceutil/trace.go:171","msg":"trace[844122797] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replicaset-controller; range_end:; response_count:1; response_revision:340; }","duration":"169.314879ms","start":"2023-12-26T21:46:19.28136Z","end":"2023-12-26T21:46:19.450675Z","steps":["trace[844122797] 'agreement among raft nodes before linearized reading'  (duration: 153.860891ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-26T21:46:22.668146Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"232.19351ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-154736\" ","response":"range_response_count:1 size:5743"}
	{"level":"info","ts":"2023-12-26T21:46:22.673597Z","caller":"traceutil/trace.go:171","msg":"trace[511291353] range","detail":"{range_begin:/registry/minions/addons-154736; range_end:; response_count:1; response_revision:378; }","duration":"237.638785ms","start":"2023-12-26T21:46:22.435901Z","end":"2023-12-26T21:46:22.673539Z","steps":["trace[511291353] 'agreement among raft nodes before linearized reading'  (duration: 36.731458ms)","trace[511291353] 'range keys from in-memory index tree'  (duration: 195.457558ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-26T21:46:22.673853Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"201.04778ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128026081072486522 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" mod_revision:363 > success:<request_put:<key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" value_size:3174 >> failure:<request_range:<key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-12-26T21:46:22.673947Z","caller":"traceutil/trace.go:171","msg":"trace[1935977781] linearizableReadLoop","detail":"{readStateIndex:388; appliedIndex:387; }","duration":"201.322071ms","start":"2023-12-26T21:46:22.472609Z","end":"2023-12-26T21:46:22.673932Z","steps":["trace[1935977781] 'read index received'  (duration: 419.797µs)","trace[1935977781] 'applied index is now lower than readState.Index'  (duration: 200.901019ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-26T21:46:22.674917Z","caller":"traceutil/trace.go:171","msg":"trace[231094946] transaction","detail":"{read_only:false; response_revision:379; number_of_response:1; }","duration":"238.601307ms","start":"2023-12-26T21:46:22.436303Z","end":"2023-12-26T21:46:22.674904Z","steps":["trace[231094946] 'process raft request'  (duration: 36.42707ms)","trace[231094946] 'compare'  (duration: 194.925385ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-26T21:46:22.679418Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"243.401262ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas/local-path-storage/\" range_end:\"/registry/resourcequotas/local-path-storage0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-26T21:46:22.685778Z","caller":"traceutil/trace.go:171","msg":"trace[1635112425] range","detail":"{range_begin:/registry/resourcequotas/local-path-storage/; range_end:/registry/resourcequotas/local-path-storage0; response_count:0; response_revision:379; }","duration":"249.765309ms","start":"2023-12-26T21:46:22.435992Z","end":"2023-12-26T21:46:22.685757Z","steps":["trace[1635112425] 'agreement among raft nodes before linearized reading'  (duration: 238.070382ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-26T21:46:22.687437Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"251.139407ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges/default/\" range_end:\"/registry/limitranges/default0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-26T21:46:22.692198Z","caller":"traceutil/trace.go:171","msg":"trace[1516003958] range","detail":"{range_begin:/registry/limitranges/default/; range_end:/registry/limitranges/default0; response_count:0; response_revision:379; }","duration":"255.897058ms","start":"2023-12-26T21:46:22.43628Z","end":"2023-12-26T21:46:22.692177Z","steps":["trace[1516003958] 'agreement among raft nodes before linearized reading'  (duration: 237.777523ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-26T21:46:22.693456Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"257.419763ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas/yakd-dashboard/\" range_end:\"/registry/resourcequotas/yakd-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-26T21:46:22.737829Z","caller":"traceutil/trace.go:171","msg":"trace[1066999797] range","detail":"{range_begin:/registry/resourcequotas/yakd-dashboard/; range_end:/registry/resourcequotas/yakd-dashboard0; response_count:0; response_revision:382; }","duration":"301.7929ms","start":"2023-12-26T21:46:22.436018Z","end":"2023-12-26T21:46:22.737811Z","steps":["trace[1066999797] 'agreement among raft nodes before linearized reading'  (duration: 238.041861ms)","trace[1066999797] 'get authentication metadata'  (duration: 19.369681ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-26T21:46:22.737927Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-26T21:46:22.436015Z","time spent":"301.894427ms","remote":"127.0.0.1:36046","response type":"/etcdserverpb.KV/Range","request count":0,"request size":84,"response count":0,"response size":29,"request content":"key:\"/registry/resourcequotas/yakd-dashboard/\" range_end:\"/registry/resourcequotas/yakd-dashboard0\" "}
	{"level":"warn","ts":"2023-12-26T21:46:22.69355Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"257.58278ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2023-12-26T21:46:22.738143Z","caller":"traceutil/trace.go:171","msg":"trace[1020394058] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:382; }","duration":"302.170483ms","start":"2023-12-26T21:46:22.435963Z","end":"2023-12-26T21:46:22.738133Z","steps":["trace[1020394058] 'agreement among raft nodes before linearized reading'  (duration: 238.109658ms)","trace[1020394058] 'get authentication metadata'  (duration: 19.456522ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-26T21:46:22.738223Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-26T21:46:22.435959Z","time spent":"302.253221ms","remote":"127.0.0.1:36074","response type":"/etcdserverpb.KV/Range","request count":0,"request size":34,"response count":1,"response size":375,"request content":"key:\"/registry/namespaces/kube-system\" "}
	{"level":"info","ts":"2023-12-26T21:46:22.687855Z","caller":"traceutil/trace.go:171","msg":"trace[1234581384] transaction","detail":"{read_only:false; response_revision:380; number_of_response:1; }","duration":"164.055398ms","start":"2023-12-26T21:46:22.523789Z","end":"2023-12-26T21:46:22.687845Z","steps":["trace[1234581384] 'process raft request'  (duration: 162.503228ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-26T21:46:22.736988Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-26T21:46:22.436272Z","time spent":"300.69471ms","remote":"127.0.0.1:36104","response type":"/etcdserverpb.KV/Range","request count":0,"request size":64,"response count":0,"response size":29,"request content":"key:\"/registry/limitranges/default/\" range_end:\"/registry/limitranges/default0\" "}
	
	
	==> gcp-auth [426fb5db606fa75ce6945ac23ec91db80974534187f20b743958f83105dfbf01] <==
	2023/12/26 21:47:46 GCP Auth Webhook started!
	2023/12/26 21:48:13 Ready to marshal response ...
	2023/12/26 21:48:13 Ready to write response ...
	2023/12/26 21:48:13 Ready to marshal response ...
	2023/12/26 21:48:13 Ready to write response ...
	2023/12/26 21:48:13 Ready to marshal response ...
	2023/12/26 21:48:13 Ready to write response ...
	2023/12/26 21:48:24 Ready to marshal response ...
	2023/12/26 21:48:24 Ready to write response ...
	2023/12/26 21:48:30 Ready to marshal response ...
	2023/12/26 21:48:30 Ready to write response ...
	2023/12/26 21:48:30 Ready to marshal response ...
	2023/12/26 21:48:30 Ready to write response ...
	2023/12/26 21:48:39 Ready to marshal response ...
	2023/12/26 21:48:39 Ready to write response ...
	2023/12/26 21:48:44 Ready to marshal response ...
	2023/12/26 21:48:44 Ready to write response ...
	2023/12/26 21:48:59 Ready to marshal response ...
	2023/12/26 21:48:59 Ready to write response ...
	2023/12/26 21:49:19 Ready to marshal response ...
	2023/12/26 21:49:19 Ready to write response ...
	
	
	==> kernel <==
	 21:55:22 up  5:37,  0 users,  load average: 0.16, 0.69, 1.33
	Linux addons-154736 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [5f14597475deec6687f7cf3ad4f366e454ec133bf87236d0dd00a321bff92445] <==
	I1226 21:53:12.249596       1 main.go:227] handling current node
	I1226 21:53:22.262218       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 21:53:22.262252       1 main.go:227] handling current node
	I1226 21:53:32.266639       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 21:53:32.266670       1 main.go:227] handling current node
	I1226 21:53:42.277721       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 21:53:42.277758       1 main.go:227] handling current node
	I1226 21:53:52.288362       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 21:53:52.288501       1 main.go:227] handling current node
	I1226 21:54:02.292683       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 21:54:02.292708       1 main.go:227] handling current node
	I1226 21:54:12.305033       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 21:54:12.305060       1 main.go:227] handling current node
	I1226 21:54:22.309597       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 21:54:22.309629       1 main.go:227] handling current node
	I1226 21:54:32.321920       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 21:54:32.321951       1 main.go:227] handling current node
	I1226 21:54:42.326881       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 21:54:42.326914       1 main.go:227] handling current node
	I1226 21:54:52.339163       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 21:54:52.339188       1 main.go:227] handling current node
	I1226 21:55:02.348797       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 21:55:02.348825       1 main.go:227] handling current node
	I1226 21:55:12.357795       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 21:55:12.357824       1 main.go:227] handling current node
	
	
	==> kube-apiserver [c5b1b0ac08cda61e9bbefc778ff2c9efec379f6aaf465452284a7a96531982d8] <==
	E1226 21:47:14.175767       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1226 21:47:14.176876       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1226 21:47:18.183142       1 handler_proxy.go:93] no RequestInfo found in the context
	E1226 21:47:18.183194       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E1226 21:47:18.183278       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.226.217:443/apis/metrics.k8s.io/v1beta1: Get "https://10.101.226.217:443/apis/metrics.k8s.io/v1beta1": context deadline exceeded
	I1226 21:47:18.226035       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1226 21:47:18.235616       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	I1226 21:47:18.249146       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1226 21:48:01.090044       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1226 21:48:13.391328       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.100.241.164"}
	I1226 21:48:47.250109       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I1226 21:48:47.272806       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1226 21:48:48.323602       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1226 21:48:57.771535       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1226 21:48:58.917884       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1226 21:48:59.228975       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.161.33"}
	I1226 21:49:19.227715       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1226 21:51:01.378469       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1226 21:51:01.378621       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1226 21:51:01.378765       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1226 21:51:01.378829       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1226 21:51:01.378947       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1226 21:51:01.379002       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	
	
	==> kube-controller-manager [5f4d17cd1a7591fad519abf838f38f74aa1d9ab2dbda0b46ce722b54400e80ee] <==
	I1226 21:49:02.278814       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	W1226 21:49:05.724978       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1226 21:49:05.725011       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1226 21:49:17.280093       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1226 21:49:19.442102       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	W1226 21:49:22.322988       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1226 21:49:22.323018       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1226 21:50:08.035478       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1226 21:50:08.035517       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1226 21:50:43.390905       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1226 21:50:43.390938       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1226 21:51:20.739347       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1226 21:51:20.739382       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1226 21:51:55.552410       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1226 21:51:55.552446       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1226 21:52:31.808159       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1226 21:52:31.808199       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1226 21:53:02.112640       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1226 21:53:02.112676       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1226 21:53:35.339875       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1226 21:53:35.339909       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1226 21:54:32.830341       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1226 21:54:32.830378       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1226 21:55:17.216364       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1226 21:55:17.216397       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [fc7c1d4cc434f0b477649382600f28a8b97ca79c8a08eb97ab1947e7523f584d] <==
	I1226 21:46:23.640742       1 server_others.go:69] "Using iptables proxy"
	I1226 21:46:23.780851       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1226 21:46:23.961749       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1226 21:46:23.964285       1 server_others.go:152] "Using iptables Proxier"
	I1226 21:46:23.964388       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1226 21:46:23.964453       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1226 21:46:23.964548       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1226 21:46:23.964814       1 server.go:846] "Version info" version="v1.28.4"
	I1226 21:46:23.965035       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1226 21:46:23.965823       1 config.go:188] "Starting service config controller"
	I1226 21:46:23.966279       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1226 21:46:23.966348       1 config.go:97] "Starting endpoint slice config controller"
	I1226 21:46:23.966380       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1226 21:46:23.966960       1 config.go:315] "Starting node config controller"
	I1226 21:46:23.967020       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1226 21:46:24.068617       1 shared_informer.go:318] Caches are synced for node config
	I1226 21:46:24.068805       1 shared_informer.go:318] Caches are synced for service config
	I1226 21:46:24.068877       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [a1a3df534703e2bb88745f494a4f27e49f98cc68c5747f8b75f69df096105d43] <==
	W1226 21:46:01.499821       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1226 21:46:01.499861       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1226 21:46:01.499951       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1226 21:46:01.499994       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1226 21:46:01.500095       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1226 21:46:01.500137       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1226 21:46:01.500229       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1226 21:46:01.500269       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1226 21:46:01.500416       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1226 21:46:01.500460       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1226 21:46:01.500581       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1226 21:46:01.500641       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1226 21:46:01.500744       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1226 21:46:01.500787       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1226 21:46:01.500885       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1226 21:46:01.500922       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1226 21:46:01.501014       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1226 21:46:01.501056       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1226 21:46:01.501147       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1226 21:46:01.501185       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1226 21:46:01.501259       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1226 21:46:01.501297       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1226 21:46:02.462665       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1226 21:46:02.462706       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1226 21:46:05.170353       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 26 21:54:51 addons-154736 kubelet[1365]: I1226 21:54:51.486045    1365 scope.go:117] "RemoveContainer" containerID="5fe6705eae1fbc89ec665967bdfcef33c4d35b3192b0d5e54fd47a92656d5772"
	Dec 26 21:54:51 addons-154736 kubelet[1365]: E1226 21:54:51.486364    1365 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(e6c2ffcd-377b-48c7-b5ca-3ac9d28f4f89)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="e6c2ffcd-377b-48c7-b5ca-3ac9d28f4f89"
	Dec 26 21:54:51 addons-154736 kubelet[1365]: E1226 21:54:51.488261    1365 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/task-pv-pod-restore" podUID="207a63c3-f7e0-4270-aa76-3681a7e5658c"
	Dec 26 21:54:52 addons-154736 kubelet[1365]: E1226 21:54:52.487289    1365 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx" podUID="89cc57b4-3f60-4a69-b7d3-dbc25226b9c0"
	Dec 26 21:54:53 addons-154736 kubelet[1365]: E1226 21:54:53.412858    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/a61ae8a41b5ee6689d25072d91db1f4be4f943b229f7bda694bb08385682cece/diff" to get inode usage: stat /var/lib/containers/storage/overlay/a61ae8a41b5ee6689d25072d91db1f4be4f943b229f7bda694bb08385682cece/diff: no such file or directory, extraDiskErr: <nil>
	Dec 26 21:55:03 addons-154736 kubelet[1365]: E1226 21:55:03.487876    1365 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/task-pv-pod-restore" podUID="207a63c3-f7e0-4270-aa76-3681a7e5658c"
	Dec 26 21:55:03 addons-154736 kubelet[1365]: E1226 21:55:03.488267    1365 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx" podUID="89cc57b4-3f60-4a69-b7d3-dbc25226b9c0"
	Dec 26 21:55:04 addons-154736 kubelet[1365]: E1226 21:55:04.712064    1365 manager.go:1106] Failed to create existing container: /docker/0927c77a91cb3abbb70268b3a742f5c0c803d1cbb42dd1efdc24f53bd33e9c94/crio-d929e20c5935e81c71048d8651d982f22977ffc67f098b4b6530407ffeaa3f0f: Error finding container d929e20c5935e81c71048d8651d982f22977ffc67f098b4b6530407ffeaa3f0f: Status 404 returned error can't find the container with id d929e20c5935e81c71048d8651d982f22977ffc67f098b4b6530407ffeaa3f0f
	Dec 26 21:55:04 addons-154736 kubelet[1365]: E1226 21:55:04.712260    1365 manager.go:1106] Failed to create existing container: /crio-9de0ffc80a888993dda0cf8d7dfbc8404cd150a9b2fabac624d21d9e640d3e38: Error finding container 9de0ffc80a888993dda0cf8d7dfbc8404cd150a9b2fabac624d21d9e640d3e38: Status 404 returned error can't find the container with id 9de0ffc80a888993dda0cf8d7dfbc8404cd150a9b2fabac624d21d9e640d3e38
	Dec 26 21:55:04 addons-154736 kubelet[1365]: E1226 21:55:04.712418    1365 manager.go:1106] Failed to create existing container: /crio-e0e51a4b6edc7ce00a271701cfc3682cb172087a8a07a3eee24537d16438244d: Error finding container e0e51a4b6edc7ce00a271701cfc3682cb172087a8a07a3eee24537d16438244d: Status 404 returned error can't find the container with id e0e51a4b6edc7ce00a271701cfc3682cb172087a8a07a3eee24537d16438244d
	Dec 26 21:55:04 addons-154736 kubelet[1365]: E1226 21:55:04.712567    1365 manager.go:1106] Failed to create existing container: /crio-d929e20c5935e81c71048d8651d982f22977ffc67f098b4b6530407ffeaa3f0f: Error finding container d929e20c5935e81c71048d8651d982f22977ffc67f098b4b6530407ffeaa3f0f: Status 404 returned error can't find the container with id d929e20c5935e81c71048d8651d982f22977ffc67f098b4b6530407ffeaa3f0f
	Dec 26 21:55:04 addons-154736 kubelet[1365]: E1226 21:55:04.712707    1365 manager.go:1106] Failed to create existing container: /crio-88bbfca1fd18479c52e5a80c7f33f86b611a658500dfd5677be53da7cb2a5271: Error finding container 88bbfca1fd18479c52e5a80c7f33f86b611a658500dfd5677be53da7cb2a5271: Status 404 returned error can't find the container with id 88bbfca1fd18479c52e5a80c7f33f86b611a658500dfd5677be53da7cb2a5271
	Dec 26 21:55:04 addons-154736 kubelet[1365]: E1226 21:55:04.712868    1365 manager.go:1106] Failed to create existing container: /docker/0927c77a91cb3abbb70268b3a742f5c0c803d1cbb42dd1efdc24f53bd33e9c94/crio-e0e51a4b6edc7ce00a271701cfc3682cb172087a8a07a3eee24537d16438244d: Error finding container e0e51a4b6edc7ce00a271701cfc3682cb172087a8a07a3eee24537d16438244d: Status 404 returned error can't find the container with id e0e51a4b6edc7ce00a271701cfc3682cb172087a8a07a3eee24537d16438244d
	Dec 26 21:55:04 addons-154736 kubelet[1365]: E1226 21:55:04.768589    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/bc0fd6958f94b2af68d66aebfa5a93fdf17ef0a2f16914164d37a3d6c1f47504/diff" to get inode usage: stat /var/lib/containers/storage/overlay/bc0fd6958f94b2af68d66aebfa5a93fdf17ef0a2f16914164d37a3d6c1f47504/diff: no such file or directory, extraDiskErr: <nil>
	Dec 26 21:55:04 addons-154736 kubelet[1365]: E1226 21:55:04.776003    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/47b4a472ab43e6738e94069f5a35204ecad97d8f2ec79132203dc4a015ea166e/diff" to get inode usage: stat /var/lib/containers/storage/overlay/47b4a472ab43e6738e94069f5a35204ecad97d8f2ec79132203dc4a015ea166e/diff: no such file or directory, extraDiskErr: <nil>
	Dec 26 21:55:04 addons-154736 kubelet[1365]: E1226 21:55:04.776019    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/00695726537d13a01c67ff06a7fae7867c7554f4b45abef0d2fedcf3241a7c45/diff" to get inode usage: stat /var/lib/containers/storage/overlay/00695726537d13a01c67ff06a7fae7867c7554f4b45abef0d2fedcf3241a7c45/diff: no such file or directory, extraDiskErr: <nil>
	Dec 26 21:55:04 addons-154736 kubelet[1365]: E1226 21:55:04.781374    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/5cf772e9a600f01aaf8234df1eedde33b958f911afea74b479d05696357835c5/diff" to get inode usage: stat /var/lib/containers/storage/overlay/5cf772e9a600f01aaf8234df1eedde33b958f911afea74b479d05696357835c5/diff: no such file or directory, extraDiskErr: <nil>
	Dec 26 21:55:04 addons-154736 kubelet[1365]: E1226 21:55:04.849893    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/2f49872ed6f4a2d544c642d174e3bcfaa973f396c3147aac730930e8430503d9/diff" to get inode usage: stat /var/lib/containers/storage/overlay/2f49872ed6f4a2d544c642d174e3bcfaa973f396c3147aac730930e8430503d9/diff: no such file or directory, extraDiskErr: <nil>
	Dec 26 21:55:04 addons-154736 kubelet[1365]: E1226 21:55:04.884915    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/f0f7030f1ab62f84c7091cb8779f8667fb6fce25fefac65ecc65e74592bea34f/diff" to get inode usage: stat /var/lib/containers/storage/overlay/f0f7030f1ab62f84c7091cb8779f8667fb6fce25fefac65ecc65e74592bea34f/diff: no such file or directory, extraDiskErr: <nil>
	Dec 26 21:55:04 addons-154736 kubelet[1365]: E1226 21:55:04.924672    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/5cf772e9a600f01aaf8234df1eedde33b958f911afea74b479d05696357835c5/diff" to get inode usage: stat /var/lib/containers/storage/overlay/5cf772e9a600f01aaf8234df1eedde33b958f911afea74b479d05696357835c5/diff: no such file or directory, extraDiskErr: <nil>
	Dec 26 21:55:06 addons-154736 kubelet[1365]: I1226 21:55:06.486185    1365 scope.go:117] "RemoveContainer" containerID="5fe6705eae1fbc89ec665967bdfcef33c4d35b3192b0d5e54fd47a92656d5772"
	Dec 26 21:55:06 addons-154736 kubelet[1365]: E1226 21:55:06.486445    1365 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(e6c2ffcd-377b-48c7-b5ca-3ac9d28f4f89)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="e6c2ffcd-377b-48c7-b5ca-3ac9d28f4f89"
	Dec 26 21:55:17 addons-154736 kubelet[1365]: E1226 21:55:17.488052    1365 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/task-pv-pod-restore" podUID="207a63c3-f7e0-4270-aa76-3681a7e5658c"
	Dec 26 21:55:21 addons-154736 kubelet[1365]: I1226 21:55:21.486257    1365 scope.go:117] "RemoveContainer" containerID="5fe6705eae1fbc89ec665967bdfcef33c4d35b3192b0d5e54fd47a92656d5772"
	Dec 26 21:55:21 addons-154736 kubelet[1365]: E1226 21:55:21.486526    1365 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(e6c2ffcd-377b-48c7-b5ca-3ac9d28f4f89)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="e6c2ffcd-377b-48c7-b5ca-3ac9d28f4f89"
	
	
	==> storage-provisioner [2ca195417d20cd7d770bd0d4ca4ba2c4f87c603396ace0f89dc95113a10a3c0f] <==
	I1226 21:46:53.461435       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1226 21:46:53.482276       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1226 21:46:53.482442       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1226 21:46:53.499286       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1226 21:46:53.499560       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-154736_8b77c78b-51e5-498d-bad2-c4833c8e2aec!
	I1226 21:46:53.501777       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1f84c5a7-4067-41c5-b8c4-76d4700df79e", APIVersion:"v1", ResourceVersion:"884", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-154736_8b77c78b-51e5-498d-bad2-c4833c8e2aec became leader
	I1226 21:46:53.600700       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-154736_8b77c78b-51e5-498d-bad2-c4833c8e2aec!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-154736 -n addons-154736
helpers_test.go:261: (dbg) Run:  kubectl --context addons-154736 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: nginx task-pv-pod-restore ingress-nginx-admission-create-jtzt2 ingress-nginx-admission-patch-gwrdr
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/CSI]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-154736 describe pod nginx task-pv-pod-restore ingress-nginx-admission-create-jtzt2 ingress-nginx-admission-patch-gwrdr
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-154736 describe pod nginx task-pv-pod-restore ingress-nginx-admission-create-jtzt2 ingress-nginx-admission-patch-gwrdr: exit status 1 (122.769921ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-154736/192.168.49.2
	Start Time:       Tue, 26 Dec 2023 21:48:59 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mttn4 (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  kube-api-access-mttn4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m24s                 default-scheduler  Successfully assigned default/nginx to addons-154736
	  Warning  Failed     5m53s                 kubelet            Failed to pull image "docker.io/nginx:alpine": initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m51s                 kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:7913e8fa2e6a5f0160a5e6b7ea48b7d4a301c6058d63c3d632a35a59093cb4eb in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m1s (x4 over 6m24s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     91s (x4 over 5m53s)   kubelet            Error: ErrImagePull
	  Warning  Failed     91s (x2 over 4m23s)   kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   BackOff    66s (x7 over 5m53s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     66s (x7 over 5m53s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             task-pv-pod-restore
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-154736/192.168.49.2
	Start Time:       Tue, 26 Dec 2023 21:49:19 +0000
	Labels:           app=task-pv-pod-restore
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6gjg5 (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc-restore
	    ReadOnly:   false
	  kube-api-access-6gjg5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m4s                   default-scheduler  Successfully assigned default/task-pv-pod-restore to addons-154736
	  Warning  Failed     2m21s (x2 over 3m52s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    100s (x4 over 6m3s)    kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     46s (x2 over 4m53s)    kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:73a06b3a2577448f9acc23502a0cb4d41919da9cc5035e66b0a9a09715397684 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     46s (x4 over 4m53s)    kubelet            Error: ErrImagePull
	  Normal   BackOff    6s (x7 over 4m53s)     kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     6s (x7 over 4m53s)     kubelet            Error: ImagePullBackOff

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-jtzt2" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-gwrdr" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-154736 describe pod nginx task-pv-pod-restore ingress-nginx-admission-create-jtzt2 ingress-nginx-admission-patch-gwrdr: exit status 1
--- FAIL: TestAddons/parallel/CSI (403.23s)
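Note: every pull failure recorded above has the same root cause, Docker Hub's anonymous pull rate limit ("toomanyrequests"); the volume operations themselves completed (PodScheduled True, claim bound). One common mitigation, sketched here with placeholder credentials (regcred, <user>, and <access-token> are illustrative names, not values from this run), is to authenticate pulls via an imagePullSecret on the default service account:

	# Create a Docker Hub pull secret in the default namespace (placeholders, not real credentials).
	kubectl --context addons-154736 create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<user> --docker-password=<access-token>
	# Attach it to the service account the failing pods run under, so their pulls are authenticated.
	kubectl --context addons-154736 patch serviceaccount default \
	  -p '{"imagePullSecrets": [{"name": "regcred"}]}'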

TestFunctional/parallel/PersistentVolumeClaim (190.23s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [d0995a50-650c-4928-83fc-533af41c36fc] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003715532s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-262391 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-262391 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-262391 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-262391 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [265d1a5f-6c1b-41a9-830c-235ee9d55ffe] Pending
helpers_test.go:344: "sp-pod" [265d1a5f-6c1b-41a9-830c-235ee9d55ffe] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E1226 22:03:11.961419  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/client.crt: no such file or directory
E1226 22:03:39.648734  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/client.crt: no such file or directory
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:130: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 3m0s: context deadline exceeded ****
functional_test_pvc_test.go:130: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-262391 -n functional-262391
functional_test_pvc_test.go:130: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2023-12-26 22:04:23.991669003 +0000 UTC m=+1184.644643858
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-262391 describe po sp-pod -n default
functional_test_pvc_test.go:130: (dbg) kubectl --context functional-262391 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-262391/192.168.49.2
Start Time:       Tue, 26 Dec 2023 22:01:23 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
  IP:  10.244.0.5
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4hc7k (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-4hc7k:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                 From               Message
  ----     ------     ----                ----               -------
  Normal   Scheduled  3m1s                default-scheduler  Successfully assigned default/sp-pod to functional-262391
  Warning  Failed     51s (x2 over 2m6s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     51s (x2 over 2m6s)  kubelet            Error: ErrImagePull
  Normal   BackOff    40s (x2 over 2m6s)  kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     40s (x2 over 2m6s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    28s (x3 over 3m1s)  kubelet            Pulling image "docker.io/nginx"
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-262391 logs sp-pod -n default
functional_test_pvc_test.go:130: (dbg) Non-zero exit: kubectl --context functional-262391 logs sp-pod -n default: exit status 1 (113.096989ms)

** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

** /stderr **
functional_test_pvc_test.go:130: kubectl --context functional-262391 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:131: failed waiting for pod: test=storage-provisioner within 3m0s: context deadline exceeded
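As in the CSI failure above, the pod never starts because docker.io refuses the anonymous pull ("toomanyrequests"); the claim itself binds and the pod is scheduled. One way to take the registry out of the loop, sketched here using standard minikube subcommands (the Audit table below shows the `cache` command already in use on this builder), is to preload the image before the test workload is applied:

	# Pull docker.io/nginx once on the host side and cache it into the node's runtime.
	out/minikube-linux-arm64 -p functional-262391 cache add docker.io/nginx
	# Verify the image is now present inside the node.
	out/minikube-linux-arm64 -p functional-262391 image ls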
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-262391
helpers_test.go:235: (dbg) docker inspect functional-262391:

-- stdout --
	[
	    {
	        "Id": "0f329dae541ede1670d5c7556e5f4b260ee3b6f71728967a170153f52271f2cf",
	        "Created": "2023-12-26T21:58:28.884688462Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 717786,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-26T21:58:29.210132221Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3bfff26a1ae256fcdf8f10a333efdefbe26edc5c1669e1cc5c973c016e44d3c4",
	        "ResolvConfPath": "/var/lib/docker/containers/0f329dae541ede1670d5c7556e5f4b260ee3b6f71728967a170153f52271f2cf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0f329dae541ede1670d5c7556e5f4b260ee3b6f71728967a170153f52271f2cf/hostname",
	        "HostsPath": "/var/lib/docker/containers/0f329dae541ede1670d5c7556e5f4b260ee3b6f71728967a170153f52271f2cf/hosts",
	        "LogPath": "/var/lib/docker/containers/0f329dae541ede1670d5c7556e5f4b260ee3b6f71728967a170153f52271f2cf/0f329dae541ede1670d5c7556e5f4b260ee3b6f71728967a170153f52271f2cf-json.log",
	        "Name": "/functional-262391",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-262391:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-262391",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/00cbe638b8c702d782fd71428fce6e33400c14e7d9f6d750537c8c238a6eae3a-init/diff:/var/lib/docker/overlay2/45396a29879cab7c8a67d68e40c59b67c1c0ba964e9ed87a152af8cc5862c477/diff",
	                "MergedDir": "/var/lib/docker/overlay2/00cbe638b8c702d782fd71428fce6e33400c14e7d9f6d750537c8c238a6eae3a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/00cbe638b8c702d782fd71428fce6e33400c14e7d9f6d750537c8c238a6eae3a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/00cbe638b8c702d782fd71428fce6e33400c14e7d9f6d750537c8c238a6eae3a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-262391",
	                "Source": "/var/lib/docker/volumes/functional-262391/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-262391",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-262391",
	                "name.minikube.sigs.k8s.io": "functional-262391",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "104ab2df16957d6ba98436c367fffde800a78d234814c5d8691dc97c19a3c649",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33681"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33680"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33677"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33679"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33678"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/104ab2df1695",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-262391": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "0f329dae541e",
	                        "functional-262391"
	                    ],
	                    "NetworkID": "038a4b5ef3d68f6ef4dd3c1b1f8aaf7070f2955dc32144f6c885c304d30c400a",
	                    "EndpointID": "76f1ef576704b6de0150926e0166e3a58fbef43c9b5de830607b124766d25cc7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
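Aside: the host-port mappings buried in this inspect dump can be extracted directly with docker's built-in Go-template support, for example (jq is assumed available and used purely for pretty-printing):

	docker inspect -f '{{json .NetworkSettings.Ports}}' functional-262391 | jq .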
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-262391 -n functional-262391
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p functional-262391 logs -n 25: (1.929432791s)
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh     | functional-262391 ssh                                                    | functional-262391 | jenkins | v1.32.0 | 26 Dec 23 22:00 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-262391 cache reload                                           | functional-262391 | jenkins | v1.32.0 | 26 Dec 23 22:00 UTC | 26 Dec 23 22:00 UTC |
	| ssh     | functional-262391 ssh                                                    | functional-262391 | jenkins | v1.32.0 | 26 Dec 23 22:00 UTC | 26 Dec 23 22:00 UTC |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.32.0 | 26 Dec 23 22:00 UTC | 26 Dec 23 22:00 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.32.0 | 26 Dec 23 22:00 UTC | 26 Dec 23 22:00 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| kubectl | functional-262391 kubectl --                                             | functional-262391 | jenkins | v1.32.0 | 26 Dec 23 22:00 UTC | 26 Dec 23 22:00 UTC |
	|         | --context functional-262391                                              |                   |         |         |                     |                     |
	|         | get pods                                                                 |                   |         |         |                     |                     |
	| start   | -p functional-262391                                                     | functional-262391 | jenkins | v1.32.0 | 26 Dec 23 22:00 UTC | 26 Dec 23 22:01 UTC |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |         |         |                     |                     |
	|         | --wait=all                                                               |                   |         |         |                     |                     |
	| service | invalid-svc -p                                                           | functional-262391 | jenkins | v1.32.0 | 26 Dec 23 22:01 UTC |                     |
	|         | functional-262391                                                        |                   |         |         |                     |                     |
	| config  | functional-262391 config unset                                           | functional-262391 | jenkins | v1.32.0 | 26 Dec 23 22:01 UTC | 26 Dec 23 22:01 UTC |
	|         | cpus                                                                     |                   |         |         |                     |                     |
	| cp      | functional-262391 cp                                                     | functional-262391 | jenkins | v1.32.0 | 26 Dec 23 22:01 UTC | 26 Dec 23 22:01 UTC |
	|         | testdata/cp-test.txt                                                     |                   |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                 |                   |         |         |                     |                     |
	| config  | functional-262391 config get                                             | functional-262391 | jenkins | v1.32.0 | 26 Dec 23 22:01 UTC |                     |
	|         | cpus                                                                     |                   |         |         |                     |                     |
	| config  | functional-262391 config set                                             | functional-262391 | jenkins | v1.32.0 | 26 Dec 23 22:01 UTC | 26 Dec 23 22:01 UTC |
	|         | cpus 2                                                                   |                   |         |         |                     |                     |
	| config  | functional-262391 config get                                             | functional-262391 | jenkins | v1.32.0 | 26 Dec 23 22:01 UTC | 26 Dec 23 22:01 UTC |
	|         | cpus                                                                     |                   |         |         |                     |                     |
	| ssh     | functional-262391 ssh -n                                                 | functional-262391 | jenkins | v1.32.0 | 26 Dec 23 22:01 UTC | 26 Dec 23 22:01 UTC |
	|         | functional-262391 sudo cat                                               |                   |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                 |                   |         |         |                     |                     |
	| config  | functional-262391 config unset                                           | functional-262391 | jenkins | v1.32.0 | 26 Dec 23 22:01 UTC | 26 Dec 23 22:01 UTC |
	|         | cpus                                                                     |                   |         |         |                     |                     |
	| config  | functional-262391 config get                                             | functional-262391 | jenkins | v1.32.0 | 26 Dec 23 22:01 UTC |                     |
	|         | cpus                                                                     |                   |         |         |                     |                     |
	| ssh     | functional-262391 ssh echo                                               | functional-262391 | jenkins | v1.32.0 | 26 Dec 23 22:01 UTC | 26 Dec 23 22:01 UTC |
	|         | hello                                                                    |                   |         |         |                     |                     |
	| cp      | functional-262391 cp                                                     | functional-262391 | jenkins | v1.32.0 | 26 Dec 23 22:01 UTC | 26 Dec 23 22:01 UTC |
	|         | functional-262391:/home/docker/cp-test.txt                               |                   |         |         |                     |                     |
	|         | /tmp/TestFunctionalparallelCpCmd963562061/001/cp-test.txt                |                   |         |         |                     |                     |
	| ssh     | functional-262391 ssh cat                                                | functional-262391 | jenkins | v1.32.0 | 26 Dec 23 22:01 UTC | 26 Dec 23 22:01 UTC |
	|         | /etc/hostname                                                            |                   |         |         |                     |                     |
	| ssh     | functional-262391 ssh -n                                                 | functional-262391 | jenkins | v1.32.0 | 26 Dec 23 22:01 UTC | 26 Dec 23 22:01 UTC |
	|         | functional-262391 sudo cat                                               |                   |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                 |                   |         |         |                     |                     |
	| tunnel  | functional-262391 tunnel                                                 | functional-262391 | jenkins | v1.32.0 | 26 Dec 23 22:01 UTC |                     |
	|         | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| tunnel  | functional-262391 tunnel                                                 | functional-262391 | jenkins | v1.32.0 | 26 Dec 23 22:01 UTC |                     |
	|         | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| cp      | functional-262391 cp                                                     | functional-262391 | jenkins | v1.32.0 | 26 Dec 23 22:01 UTC | 26 Dec 23 22:01 UTC |
	|         | testdata/cp-test.txt                                                     |                   |         |         |                     |                     |
	|         | /tmp/does/not/exist/cp-test.txt                                          |                   |         |         |                     |                     |
	| ssh     | functional-262391 ssh -n                                                 | functional-262391 | jenkins | v1.32.0 | 26 Dec 23 22:01 UTC | 26 Dec 23 22:01 UTC |
	|         | functional-262391 sudo cat                                               |                   |         |         |                     |                     |
	|         | /tmp/does/not/exist/cp-test.txt                                          |                   |         |         |                     |                     |
	| tunnel  | functional-262391 tunnel                                                 | functional-262391 | jenkins | v1.32.0 | 26 Dec 23 22:01 UTC |                     |
	|         | --alsologtostderr                                                        |                   |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/26 22:00:28
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1226 22:00:28.077567  722547 out.go:296] Setting OutFile to fd 1 ...
	I1226 22:00:28.077707  722547 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 22:00:28.077711  722547 out.go:309] Setting ErrFile to fd 2...
	I1226 22:00:28.077715  722547 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 22:00:28.077994  722547 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17857-697646/.minikube/bin
	I1226 22:00:28.078352  722547 out.go:303] Setting JSON to false
	I1226 22:00:28.079291  722547 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":20562,"bootTime":1703607466,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1226 22:00:28.079356  722547 start.go:138] virtualization:  
	I1226 22:00:28.081876  722547 out.go:177] * [functional-262391] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1226 22:00:28.084451  722547 out.go:177]   - MINIKUBE_LOCATION=17857
	I1226 22:00:28.084648  722547 notify.go:220] Checking for updates...
	I1226 22:00:28.086567  722547 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1226 22:00:28.088590  722547 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17857-697646/kubeconfig
	I1226 22:00:28.090400  722547 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17857-697646/.minikube
	I1226 22:00:28.091983  722547 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1226 22:00:28.093704  722547 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1226 22:00:28.096176  722547 config.go:182] Loaded profile config "functional-262391": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1226 22:00:28.096280  722547 driver.go:392] Setting default libvirt URI to qemu:///system
	I1226 22:00:28.120506  722547 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1226 22:00:28.120617  722547 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 22:00:28.203054  722547 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:54 SystemTime:2023-12-26 22:00:28.192248636 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1226 22:00:28.203165  722547 docker.go:295] overlay module found
	I1226 22:00:28.205492  722547 out.go:177] * Using the docker driver based on existing profile
	I1226 22:00:28.207414  722547 start.go:298] selected driver: docker
	I1226 22:00:28.207423  722547 start.go:902] validating driver "docker" against &{Name:functional-262391 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-262391 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 22:00:28.207508  722547 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1226 22:00:28.207613  722547 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 22:00:28.287087  722547 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:54 SystemTime:2023-12-26 22:00:28.275911702 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1226 22:00:28.287531  722547 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1226 22:00:28.287576  722547 cni.go:84] Creating CNI manager for ""
	I1226 22:00:28.287587  722547 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1226 22:00:28.287598  722547 start_flags.go:323] config:
	{Name:functional-262391 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-262391 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 22:00:28.291198  722547 out.go:177] * Starting control plane node functional-262391 in cluster functional-262391
	I1226 22:00:28.293106  722547 cache.go:121] Beginning downloading kic base image for docker with crio
	I1226 22:00:28.295029  722547 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I1226 22:00:28.297201  722547 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1226 22:00:28.297250  722547 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17857-697646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I1226 22:00:28.297257  722547 cache.go:56] Caching tarball of preloaded images
	I1226 22:00:28.297280  722547 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I1226 22:00:28.297337  722547 preload.go:174] Found /home/jenkins/minikube-integration/17857-697646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1226 22:00:28.297346  722547 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1226 22:00:28.297466  722547 profile.go:148] Saving config to /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/functional-262391/config.json ...
	I1226 22:00:28.315242  722547 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon, skipping pull
	I1226 22:00:28.315256  722547 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in daemon, skipping load
	I1226 22:00:28.315275  722547 cache.go:194] Successfully downloaded all kic artifacts
	I1226 22:00:28.315323  722547 start.go:365] acquiring machines lock for functional-262391: {Name:mkd624895a4124116431e841da065e96d170c675 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 22:00:28.315389  722547 start.go:369] acquired machines lock for "functional-262391" in 41.648µs
	I1226 22:00:28.315422  722547 start.go:96] Skipping create...Using existing machine configuration
	I1226 22:00:28.315427  722547 fix.go:54] fixHost starting: 
	I1226 22:00:28.315697  722547 cli_runner.go:164] Run: docker container inspect functional-262391 --format={{.State.Status}}
	I1226 22:00:28.334365  722547 fix.go:102] recreateIfNeeded on functional-262391: state=Running err=<nil>
	W1226 22:00:28.334393  722547 fix.go:128] unexpected machine state, will restart: <nil>
	I1226 22:00:28.336687  722547 out.go:177] * Updating the running docker "functional-262391" container ...
	I1226 22:00:28.338639  722547 machine.go:88] provisioning docker machine ...
	I1226 22:00:28.338672  722547 ubuntu.go:169] provisioning hostname "functional-262391"
	I1226 22:00:28.338745  722547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-262391
	I1226 22:00:28.358928  722547 main.go:141] libmachine: Using SSH client type: native
	I1226 22:00:28.359433  722547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33681 <nil> <nil>}
	I1226 22:00:28.359445  722547 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-262391 && echo "functional-262391" | sudo tee /etc/hostname
	I1226 22:00:28.516200  722547 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-262391
	
	I1226 22:00:28.516274  722547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-262391
	I1226 22:00:28.539952  722547 main.go:141] libmachine: Using SSH client type: native
	I1226 22:00:28.540340  722547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33681 <nil> <nil>}
	I1226 22:00:28.540355  722547 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-262391' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-262391/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-262391' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1226 22:00:28.685949  722547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1226 22:00:28.685965  722547 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17857-697646/.minikube CaCertPath:/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17857-697646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17857-697646/.minikube}
	I1226 22:00:28.685994  722547 ubuntu.go:177] setting up certificates
	I1226 22:00:28.686004  722547 provision.go:83] configureAuth start
	I1226 22:00:28.686069  722547 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-262391
	I1226 22:00:28.705691  722547 provision.go:138] copyHostCerts
	I1226 22:00:28.705766  722547 exec_runner.go:144] found /home/jenkins/minikube-integration/17857-697646/.minikube/ca.pem, removing ...
	I1226 22:00:28.705788  722547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17857-697646/.minikube/ca.pem
	I1226 22:00:28.705861  722547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17857-697646/.minikube/ca.pem (1082 bytes)
	I1226 22:00:28.705961  722547 exec_runner.go:144] found /home/jenkins/minikube-integration/17857-697646/.minikube/cert.pem, removing ...
	I1226 22:00:28.705965  722547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17857-697646/.minikube/cert.pem
	I1226 22:00:28.705989  722547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17857-697646/.minikube/cert.pem (1123 bytes)
	I1226 22:00:28.706043  722547 exec_runner.go:144] found /home/jenkins/minikube-integration/17857-697646/.minikube/key.pem, removing ...
	I1226 22:00:28.706049  722547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17857-697646/.minikube/key.pem
	I1226 22:00:28.706071  722547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17857-697646/.minikube/key.pem (1679 bytes)
	I1226 22:00:28.706111  722547 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17857-697646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca-key.pem org=jenkins.functional-262391 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube functional-262391]
	I1226 22:00:29.887340  722547 provision.go:172] copyRemoteCerts
	I1226 22:00:29.887421  722547 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1226 22:00:29.887471  722547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-262391
	I1226 22:00:29.908213  722547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33681 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/functional-262391/id_rsa Username:docker}
	I1226 22:00:30.031735  722547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1226 22:00:30.078161  722547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1226 22:00:30.114221  722547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1226 22:00:30.150210  722547 provision.go:86] duration metric: configureAuth took 1.464169306s
	I1226 22:00:30.150239  722547 ubuntu.go:193] setting minikube options for container-runtime
	I1226 22:00:30.150456  722547 config.go:182] Loaded profile config "functional-262391": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1226 22:00:30.150560  722547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-262391
	I1226 22:00:30.171705  722547 main.go:141] libmachine: Using SSH client type: native
	I1226 22:00:30.172118  722547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33681 <nil> <nil>}
	I1226 22:00:30.172133  722547 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1226 22:00:35.621496  722547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1226 22:00:35.621509  722547 machine.go:91] provisioned docker machine in 7.2828605s
	I1226 22:00:35.621519  722547 start.go:300] post-start starting for "functional-262391" (driver="docker")
	I1226 22:00:35.621538  722547 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1226 22:00:35.621615  722547 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1226 22:00:35.621656  722547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-262391
	I1226 22:00:35.641450  722547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33681 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/functional-262391/id_rsa Username:docker}
	I1226 22:00:35.743339  722547 ssh_runner.go:195] Run: cat /etc/os-release
	I1226 22:00:35.747685  722547 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1226 22:00:35.747711  722547 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1226 22:00:35.747720  722547 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1226 22:00:35.747727  722547 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1226 22:00:35.747736  722547 filesync.go:126] Scanning /home/jenkins/minikube-integration/17857-697646/.minikube/addons for local assets ...
	I1226 22:00:35.747794  722547 filesync.go:126] Scanning /home/jenkins/minikube-integration/17857-697646/.minikube/files for local assets ...
	I1226 22:00:35.747879  722547 filesync.go:149] local asset: /home/jenkins/minikube-integration/17857-697646/.minikube/files/etc/ssl/certs/7030362.pem -> 7030362.pem in /etc/ssl/certs
	I1226 22:00:35.747957  722547 filesync.go:149] local asset: /home/jenkins/minikube-integration/17857-697646/.minikube/files/etc/test/nested/copy/703036/hosts -> hosts in /etc/test/nested/copy/703036
	I1226 22:00:35.748000  722547 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/703036
	I1226 22:00:35.758519  722547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/files/etc/ssl/certs/7030362.pem --> /etc/ssl/certs/7030362.pem (1708 bytes)
	I1226 22:00:35.786952  722547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/files/etc/test/nested/copy/703036/hosts --> /etc/test/nested/copy/703036/hosts (40 bytes)
	I1226 22:00:35.815590  722547 start.go:303] post-start completed in 194.056661ms
	I1226 22:00:35.815660  722547 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1226 22:00:35.815698  722547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-262391
	I1226 22:00:35.836942  722547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33681 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/functional-262391/id_rsa Username:docker}
	I1226 22:00:35.935418  722547 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1226 22:00:35.941700  722547 fix.go:56] fixHost completed within 7.626265465s
	I1226 22:00:35.941725  722547 start.go:83] releasing machines lock for "functional-262391", held for 7.626318747s
	I1226 22:00:35.941792  722547 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-262391
	I1226 22:00:35.959472  722547 ssh_runner.go:195] Run: cat /version.json
	I1226 22:00:35.959488  722547 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1226 22:00:35.959516  722547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-262391
	I1226 22:00:35.959523  722547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-262391
	I1226 22:00:35.979276  722547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33681 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/functional-262391/id_rsa Username:docker}
	I1226 22:00:35.986776  722547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33681 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/functional-262391/id_rsa Username:docker}
	I1226 22:00:36.081395  722547 ssh_runner.go:195] Run: systemctl --version
	I1226 22:00:36.227930  722547 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1226 22:00:36.378456  722547 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1226 22:00:36.384171  722547 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1226 22:00:36.394811  722547 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1226 22:00:36.394896  722547 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1226 22:00:36.406083  722547 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1226 22:00:36.406097  722547 start.go:475] detecting cgroup driver to use...
	I1226 22:00:36.406127  722547 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1226 22:00:36.406174  722547 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1226 22:00:36.420610  722547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1226 22:00:36.434077  722547 docker.go:203] disabling cri-docker service (if available) ...
	I1226 22:00:36.434129  722547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1226 22:00:36.450077  722547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1226 22:00:36.464184  722547 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1226 22:00:36.601520  722547 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1226 22:00:36.743683  722547 docker.go:219] disabling docker service ...
	I1226 22:00:36.743750  722547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1226 22:00:36.762919  722547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1226 22:00:36.776505  722547 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1226 22:00:36.907739  722547 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1226 22:00:37.046877  722547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1226 22:00:37.061954  722547 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1226 22:00:37.083181  722547 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1226 22:00:37.083238  722547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1226 22:00:37.095502  722547 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1226 22:00:37.095577  722547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1226 22:00:37.107861  722547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1226 22:00:37.120491  722547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1226 22:00:37.133017  722547 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1226 22:00:37.144416  722547 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1226 22:00:37.154923  722547 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1226 22:00:37.165717  722547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1226 22:00:37.306326  722547 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1226 22:00:43.142279  722547 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.835926872s)
	I1226 22:00:43.142299  722547 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1226 22:00:43.142353  722547 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1226 22:00:43.147176  722547 start.go:543] Will wait 60s for crictl version
	I1226 22:00:43.147229  722547 ssh_runner.go:195] Run: which crictl
	I1226 22:00:43.151603  722547 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1226 22:00:43.196006  722547 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1226 22:00:43.196084  722547 ssh_runner.go:195] Run: crio --version
	I1226 22:00:43.240454  722547 ssh_runner.go:195] Run: crio --version
	I1226 22:00:43.292076  722547 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I1226 22:00:43.293737  722547 cli_runner.go:164] Run: docker network inspect functional-262391 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1226 22:00:43.311983  722547 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1226 22:00:43.318919  722547 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1226 22:00:43.320691  722547 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1226 22:00:43.320766  722547 ssh_runner.go:195] Run: sudo crictl images --output json
	I1226 22:00:43.367915  722547 crio.go:496] all images are preloaded for cri-o runtime.
	I1226 22:00:43.367928  722547 crio.go:415] Images already preloaded, skipping extraction
	I1226 22:00:43.367981  722547 ssh_runner.go:195] Run: sudo crictl images --output json
	I1226 22:00:43.409081  722547 crio.go:496] all images are preloaded for cri-o runtime.
	I1226 22:00:43.409095  722547 cache_images.go:84] Images are preloaded, skipping loading
	I1226 22:00:43.409174  722547 ssh_runner.go:195] Run: crio config
	I1226 22:00:43.462669  722547 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1226 22:00:43.462697  722547 cni.go:84] Creating CNI manager for ""
	I1226 22:00:43.462705  722547 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1226 22:00:43.462714  722547 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1226 22:00:43.462735  722547 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-262391 NodeName:functional-262391 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1226 22:00:43.462876  722547 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-262391"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
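	The ClusterConfiguration rendered above is where the --extra-config flag recorded in the Audit table lands: minikube parses apiserver.enable-admission-plugins=NamespaceAutoProvision, overwrites the default admission-plugin list (see the extraconfig.go:124 line earlier in this log), and emits it as an apiServer extraArgs entry. A minimal sketch of that flag-to-manifest mapping, using only values visible in this run:
	
		out/minikube-linux-arm64 start -p functional-262391 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
		# ...renders into /var/tmp/minikube/kubeadm.yaml.new:
		#   apiServer:
		#     extraArgs:
		#       enable-admission-plugins: "NamespaceAutoProvision"
	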
	I1226 22:00:43.462950  722547 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=functional-262391 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:functional-262391 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
	I1226 22:00:43.463012  722547 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1226 22:00:43.473762  722547 binaries.go:44] Found k8s binaries, skipping transfer
	I1226 22:00:43.473823  722547 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1226 22:00:43.484175  722547 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (427 bytes)
	I1226 22:00:43.505234  722547 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1226 22:00:43.526879  722547 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1948 bytes)
	I1226 22:00:43.548396  722547 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1226 22:00:43.553114  722547 certs.go:56] Setting up /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/functional-262391 for IP: 192.168.49.2
	I1226 22:00:43.553137  722547 certs.go:190] acquiring lock for shared ca certs: {Name:mke6488a150c186a525017f74b8a69a9f5240d76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 22:00:43.553290  722547 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17857-697646/.minikube/ca.key
	I1226 22:00:43.553329  722547 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17857-697646/.minikube/proxy-client-ca.key
	I1226 22:00:43.553398  722547 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/functional-262391/client.key
	I1226 22:00:43.553447  722547 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/functional-262391/apiserver.key.dd3b5fb2
	I1226 22:00:43.553493  722547 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/functional-262391/proxy-client.key
	I1226 22:00:43.553628  722547 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/home/jenkins/minikube-integration/17857-697646/.minikube/certs/703036.pem (1338 bytes)
	W1226 22:00:43.553654  722547 certs.go:433] ignoring /home/jenkins/minikube-integration/17857-697646/.minikube/certs/home/jenkins/minikube-integration/17857-697646/.minikube/certs/703036_empty.pem, impossibly tiny 0 bytes
	I1226 22:00:43.553662  722547 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca-key.pem (1675 bytes)
	I1226 22:00:43.553688  722547 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem (1082 bytes)
	I1226 22:00:43.553711  722547 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/home/jenkins/minikube-integration/17857-697646/.minikube/certs/cert.pem (1123 bytes)
	I1226 22:00:43.553732  722547 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/home/jenkins/minikube-integration/17857-697646/.minikube/certs/key.pem (1679 bytes)
	I1226 22:00:43.553779  722547 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-697646/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17857-697646/.minikube/files/etc/ssl/certs/7030362.pem (1708 bytes)
	I1226 22:00:43.554518  722547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/functional-262391/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1226 22:00:43.583140  722547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/functional-262391/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1226 22:00:43.611782  722547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/functional-262391/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1226 22:00:43.640223  722547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/functional-262391/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1226 22:00:43.669676  722547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1226 22:00:43.698487  722547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1226 22:00:43.727275  722547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1226 22:00:43.755853  722547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1226 22:00:43.784903  722547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/certs/703036.pem --> /usr/share/ca-certificates/703036.pem (1338 bytes)
	I1226 22:00:43.814218  722547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/files/etc/ssl/certs/7030362.pem --> /usr/share/ca-certificates/7030362.pem (1708 bytes)
	I1226 22:00:43.843528  722547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1226 22:00:43.872256  722547 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1226 22:00:43.893667  722547 ssh_runner.go:195] Run: openssl version
	I1226 22:00:43.900745  722547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/703036.pem && ln -fs /usr/share/ca-certificates/703036.pem /etc/ssl/certs/703036.pem"
	I1226 22:00:43.912680  722547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/703036.pem
	I1226 22:00:43.917263  722547 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 26 21:58 /usr/share/ca-certificates/703036.pem
	I1226 22:00:43.917320  722547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/703036.pem
	I1226 22:00:43.926128  722547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/703036.pem /etc/ssl/certs/51391683.0"
	I1226 22:00:43.937392  722547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7030362.pem && ln -fs /usr/share/ca-certificates/7030362.pem /etc/ssl/certs/7030362.pem"
	I1226 22:00:43.949381  722547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7030362.pem
	I1226 22:00:43.953970  722547 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 26 21:58 /usr/share/ca-certificates/7030362.pem
	I1226 22:00:43.954036  722547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7030362.pem
	I1226 22:00:43.962837  722547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7030362.pem /etc/ssl/certs/3ec20f2e.0"
	I1226 22:00:43.973970  722547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1226 22:00:43.985565  722547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1226 22:00:43.990601  722547 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 26 21:45 /usr/share/ca-certificates/minikubeCA.pem
	I1226 22:00:43.990662  722547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1226 22:00:43.999545  722547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
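	The test -L / ln -fs pairs above follow the standard OpenSSL CA-directory convention: verification looks certificates up by a symlink named <subject-hash>.0, so each PEM placed under /usr/share/ca-certificates gets a hash-named link in /etc/ssl/certs (the same layout the c_rehash utility produces). The link name comes from the preceding hash command; for the minikube CA in this run:
	
		openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
		# prints b5213941, which is why the link created above is /etc/ssl/certs/b5213941.0
	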
	I1226 22:00:44.016353  722547 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1226 22:00:44.021173  722547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1226 22:00:44.030337  722547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1226 22:00:44.039063  722547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1226 22:00:44.048003  722547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1226 22:00:44.057012  722547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1226 22:00:44.065764  722547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1226 22:00:44.074648  722547 kubeadm.go:404] StartCluster: {Name:functional-262391 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-262391 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 22:00:44.074743  722547 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1226 22:00:44.074802  722547 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1226 22:00:44.161200  722547 cri.go:89] found id: "c822d55e3c7244a97ad01609df5b78c3fce79bd4eec64ad507404b68998a4d17"
	I1226 22:00:44.161212  722547 cri.go:89] found id: "e26ee425462b716831b83e20c569f4e514e2fa4c6413ef4056d6e78bd6facc52"
	I1226 22:00:44.161217  722547 cri.go:89] found id: "8c61210b5431b6b508e2020ed0dd8a0c96b01bd1f23453dbb7c3388e4cec5be9"
	I1226 22:00:44.161221  722547 cri.go:89] found id: "1ef2a1a3e47ebe5b2a04ac354c6ce5c027694d64a2536f75cc3431a8e1535ae8"
	I1226 22:00:44.161224  722547 cri.go:89] found id: "a34d8d5ed9ea4f1f416f5d388b6188b40b9af4f3a7333580289b2408beeb1b77"
	I1226 22:00:44.161230  722547 cri.go:89] found id: "966133bd698f243db888d7ef30b4d6b869951a41045f90da8aad478517af91f1"
	I1226 22:00:44.161235  722547 cri.go:89] found id: "d8f430ff9b1cde2991cdb177244c7ccd14d87e4017a42ed0df1ec6b53cbd6242"
	I1226 22:00:44.161238  722547 cri.go:89] found id: "0d6b996118725bf2ffc61c626851a3807df1a7c2c242876e22173b8ef119c623"
	I1226 22:00:44.161241  722547 cri.go:89] found id: "b2c24d01056cce186e9a79409051bea69c46dee8a050dbc29367a57fd6eae353"
	I1226 22:00:44.161247  722547 cri.go:89] found id: "83c55d47ff5e935f16789cfbc0fc8bab7c9a4bf757439bd724b64460894f5e51"
	I1226 22:00:44.161251  722547 cri.go:89] found id: "6207fb81b48065f5af35d56fd8b3167db0ada159d9f4222fae7353f737384f6a"
	I1226 22:00:44.161254  722547 cri.go:89] found id: "98e8aac979f46fea23543f19ff47e6bc4dcc0c65d0987b740953535e388d0092"
	I1226 22:00:44.161257  722547 cri.go:89] found id: "15834f50e1c0634720103dd691d7f396f3d713be5acb1c38130bd0723fa9d1a8"
	I1226 22:00:44.161263  722547 cri.go:89] found id: "545494fc13bba801136324a819a23116e43adc0b25ddd0b728dbff63726bda54"
	I1226 22:00:44.161266  722547 cri.go:89] found id: ""
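
(Editorial note: the "found id:" lines above come from parsing the crictl invocation two steps earlier: "--quiet" prints one container ID per line, and the label selector restricts the listing to pods in the kube-system namespace; the trailing empty ID is the blank final line of that output. A rough local sketch of that parse step follows; listPodContainers is a hypothetical helper, and minikube actually pipes the command through its ssh_runner rather than exec'ing it on the host.)

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listPodContainers returns the IDs of all containers (running or not) whose
// pod lives in the given namespace, using the same crictl flags as the log.
func listPodContainers(namespace string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace="+namespace).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listPodContainers("kube-system")
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
}
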
	I1226 22:00:44.161315  722547 ssh_runner.go:195] Run: sudo runc list -f json
	I1226 22:00:44.199369  722547 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"0d6b996118725bf2ffc61c626851a3807df1a7c2c242876e22173b8ef119c623","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/0d6b996118725bf2ffc61c626851a3807df1a7c2c242876e22173b8ef119c623/userdata","rootfs":"/var/lib/containers/storage/overlay/e6f9c7b2649a94c4245c21b8d83869f433959f5576317b997dba4b697ab6d040/merged","created":"2023-12-26T21:59:58.005039581Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e1639c7a","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"2","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e1639c7a\",\"io.kubernetes.container.restartCount\":\"2\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminatio
nMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"0d6b996118725bf2ffc61c626851a3807df1a7c2c242876e22173b8ef119c623","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-12-26T21:59:57.930808397Z","io.kubernetes.cri-o.Image":"05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.28.4","io.kubernetes.cri-o.ImageRef":"05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-functional-262391\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"4e848c64ec041214c76d5b254470a706\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-functional-262391_4e848c64ec041214c76d5b254470a706/kube-scheduler/2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\",
\"attempt\":2}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/e6f9c7b2649a94c4245c21b8d83869f433959f5576317b997dba4b697ab6d040/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-functional-262391_kube-system_4e848c64ec041214c76d5b254470a706_2","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/366e6735458fedfa28cfa70967455c9882b1068bc47fbc77eb10a3c732b36f54/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"366e6735458fedfa28cfa70967455c9882b1068bc47fbc77eb10a3c732b36f54","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-functional-262391_kube-system_4e848c64ec041214c76d5b254470a706_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/4e848c64ec041214c76d5b254470a706/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_rel
abel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/4e848c64ec041214c76d5b254470a706/containers/kube-scheduler/cae3b81f\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-functional-262391","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"4e848c64ec041214c76d5b254470a706","kubernetes.io/config.hash":"4e848c64ec041214c76d5b254470a706","kubernetes.io/config.seen":"2023-12-26T21:58:44.256144332Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"15834f50e1c0634720103dd691d7f396f3d713be5acb1c38130bd0723fa9d1a8","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/15834f50e1c0634720103dd691d7f396f3d713be5acb1c38130bd0723
fa9d1a8/userdata","rootfs":"/var/lib/containers/storage/overlay/2c37b6f96a67b2d7e480fbe80ea2c6dc9377e9e8443de9eb812392cc2e63749e/merged","created":"2023-12-26T21:59:46.197031125Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"62225a40","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"62225a40\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\
",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"15834f50e1c0634720103dd691d7f396f3d713be5acb1c38130bd0723fa9d1a8","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-12-26T21:59:45.956486274Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","io.kubernetes.cri-o.ImageName":"registry.k8s.io/coredns/coredns:v1.10.1","io.kubernetes.cri-o.ImageRef":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-5dd5756b68-rvzcn\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\
":\"d1d1dee1-9964-4293-b138-a8cba4d4a1b9\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-5dd5756b68-rvzcn_d1d1dee1-9964-4293-b138-a8cba4d4a1b9/coredns/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/2c37b6f96a67b2d7e480fbe80ea2c6dc9377e9e8443de9eb812392cc2e63749e/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-5dd5756b68-rvzcn_kube-system_d1d1dee1-9964-4293-b138-a8cba4d4a1b9_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/fef827a05791974659b0b84ebbfb174647c84037c272f8d37e49dd4e0a7f2eb5/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"fef827a05791974659b0b84ebbfb174647c84037c272f8d37e49dd4e0a7f2eb5","io.kubernetes.cri-o.SandboxName":"k8s_coredns-5dd5756b68-rvzcn_kube-system_d1d1dee1-9964-4293-b138-a8cba4d4a1b9_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.c
ri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/d1d1dee1-9964-4293-b138-a8cba4d4a1b9/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/d1d1dee1-9964-4293-b138-a8cba4d4a1b9/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/d1d1dee1-9964-4293-b138-a8cba4d4a1b9/containers/coredns/669fb498\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/d1d1dee1-9964-4293-b138-a8cba4d4a1b9/volumes/kubernetes.io~projected/kube-api-access-k49lp\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-5dd5756b68-rvzcn","io.kubernetes.pod.namespace":"ku
be-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"d1d1dee1-9964-4293-b138-a8cba4d4a1b9","kubernetes.io/config.seen":"2023-12-26T21:59:36.030593117Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1ef2a1a3e47ebe5b2a04ac354c6ce5c027694d64a2536f75cc3431a8e1535ae8","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/1ef2a1a3e47ebe5b2a04ac354c6ce5c027694d64a2536f75cc3431a8e1535ae8/userdata","rootfs":"/var/lib/containers/storage/overlay/46f087607222d23d65b874e64b3479830553447b842a0bee6d8ad26d7bf4ea98/merged","created":"2023-12-26T21:59:58.276848051Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"b60ddd3e","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"2","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.con
tainer.hash\":\"b60ddd3e\",\"io.kubernetes.container.restartCount\":\"2\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"1ef2a1a3e47ebe5b2a04ac354c6ce5c027694d64a2536f75cc3431a8e1535ae8","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-12-26T21:59:58.091964135Z","io.kubernetes.cri-o.Image":"9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.28.4","io.kubernetes.cri-o.ImageRef":"9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-functional-262391\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"3e22602710f9a895d535604465d6ab
72\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-functional-262391_3e22602710f9a895d535604465d6ab72/kube-controller-manager/2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":2}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/46f087607222d23d65b874e64b3479830553447b842a0bee6d8ad26d7bf4ea98/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-functional-262391_kube-system_3e22602710f9a895d535604465d6ab72_2","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/ac3b0cf8e4b5b312e960910cb2b32e65a942cf16f1fac38ba8edfbddf7c7793d/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"ac3b0cf8e4b5b312e960910cb2b32e65a942cf16f1fac38ba8edfbddf7c7793d","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-functional-262391_kube-system_3e22602710f9a895d535604465d6ab72_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kub
ernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/3e22602710f9a895d535604465d6ab72/containers/kube-controller-manager/747c3f01\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/3e22602710f9a895d535604465d6ab72/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\"
,\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-functional-262391","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"3e22602710f9a895d535604465d6ab72","kubernetes.io/config.hash":"3e22602710f9a895d535604465d6ab72","kubernetes.io/config.seen":"2023-12-26T21:58:44.256143077Z","kubernetes.io/config.source":"file
"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"545494fc13bba801136324a819a23116e43adc0b25ddd0b728dbff63726bda54","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/545494fc13bba801136324a819a23116e43adc0b25ddd0b728dbff63726bda54/userdata","rootfs":"/var/lib/containers/storage/overlay/dbd9b792b815ccc850e73d5ac3ecd2e7975581ef3d8af89a22f63b2e9c5d75c7/merged","created":"2023-12-26T21:59:46.187835709Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"55ae7856","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"55ae7856\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernete
s.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"545494fc13bba801136324a819a23116e43adc0b25ddd0b728dbff63726bda54","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-12-26T21:59:45.925397009Z","io.kubernetes.cri-o.Image":"04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.28.4","io.kubernetes.cri-o.ImageRef":"04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-functional-262391\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"df6200335158d08f59d995a75303187b\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-functional-262391_df6200335158d08f59d995a75303187b/kube-apiserver/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\",\"attempt\":1}","io.kubernetes.cri-o.Mou
ntPoint":"/var/lib/containers/storage/overlay/dbd9b792b815ccc850e73d5ac3ecd2e7975581ef3d8af89a22f63b2e9c5d75c7/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-functional-262391_kube-system_df6200335158d08f59d995a75303187b_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/cb35f2f6fa9bf27026f38376801e38eea4561a9f629ff5339aa0bf5d18c1139c/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"cb35f2f6fa9bf27026f38376801e38eea4561a9f629ff5339aa0bf5d18c1139c","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-functional-262391_kube-system_df6200335158d08f59d995a75303187b_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/df6200335158d08f59d995a75303187b/containers/kube-apiserver/44f93602\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\
":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/df6200335158d08f59d995a75303187b/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-functional-262391","io.ku
bernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"df6200335158d08f59d995a75303187b","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"df6200335158d08f59d995a75303187b","kubernetes.io/config.seen":"2023-12-26T21:58:44.256134954Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6207fb81b48065f5af35d56fd8b3167db0ada159d9f4222fae7353f737384f6a","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/6207fb81b48065f5af35d56fd8b3167db0ada159d9f4222fae7353f737384f6a/userdata","rootfs":"/var/lib/containers/storage/overlay/d251e42c242a820a5fb469fa7d2496bedf2d5a0b78b98e4f64627d9207902e25/merged","created":"2023-12-26T21:59:46.23756494Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"cc0ac28c","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"1","io.kubernetes.container.t
erminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"cc0ac28c\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"6207fb81b48065f5af35d56fd8b3167db0ada159d9f4222fae7353f737384f6a","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-12-26T21:59:45.982940625Z","io.kubernetes.cri-o.Image":"3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-proxy:v1.28.4","io.kubernetes.cri-o.ImageRef":"3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-9m7p9\
",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"c8184d93-3481-4aa6-bbda-6f73ecb4ee2e\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-9m7p9_c8184d93-3481-4aa6-bbda-6f73ecb4ee2e/kube-proxy/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d251e42c242a820a5fb469fa7d2496bedf2d5a0b78b98e4f64627d9207902e25/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-9m7p9_kube-system_c8184d93-3481-4aa6-bbda-6f73ecb4ee2e_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/ced93628785c08737c21e870f51dfccb3a72bf75cbb625a905b814cdb39d300c/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"ced93628785c08737c21e870f51dfccb3a72bf75cbb625a905b814cdb39d300c","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-9m7p9_kube-system_c8184d93-3481-4aa6-bbda-6f73ecb4ee2e_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"fals
e","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/c8184d93-3481-4aa6-bbda-6f73ecb4ee2e/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/c8184d93-3481-4aa6-bbda-6f73ecb4ee2e/containers/kube-proxy/7c33ce51\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/c8184d93-3481-4aa6-bbda-6f73ecb4ee2e/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container
_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/c8184d93-3481-4aa6-bbda-6f73ecb4ee2e/volumes/kubernetes.io~projected/kube-api-access-hqstw\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-proxy-9m7p9","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"c8184d93-3481-4aa6-bbda-6f73ecb4ee2e","kubernetes.io/config.seen":"2023-12-26T21:59:04.932604429Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8c61210b5431b6b508e2020ed0dd8a0c96b01bd1f23453dbb7c3388e4cec5be9","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/8c61210b5431b6b508e2020ed0dd8a0c96b01bd1f23453dbb7c3388e4cec5be9/userdata","rootfs":"/var/lib/containers/storage/overlay/5ffceda8474d3eef9ee31929048476a6259edcea4a696b4ce8b5a610ce691c2c/merged","created":"2023-12-26T21:59:58.27596753Z","annotations":{"io.container.manager":"cri-
o","io.kubernetes.container.hash":"55ae7856","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"2","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"55ae7856\",\"io.kubernetes.container.restartCount\":\"2\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"8c61210b5431b6b508e2020ed0dd8a0c96b01bd1f23453dbb7c3388e4cec5be9","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-12-26T21:59:58.118598938Z","io.kubernetes.cri-o.Image":"04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.28.4","io.kubernetes.cri-o.ImageRef":"04b4c447bb9d4840af3bf7e83
6397379d65df87c86e55dcd27f31a8d11df2419","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-functional-262391\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"df6200335158d08f59d995a75303187b\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-functional-262391_df6200335158d08f59d995a75303187b/kube-apiserver/2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\",\"attempt\":2}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/5ffceda8474d3eef9ee31929048476a6259edcea4a696b4ce8b5a610ce691c2c/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-functional-262391_kube-system_df6200335158d08f59d995a75303187b_2","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/cb35f2f6fa9bf27026f38376801e38eea4561a9f629ff5339aa0bf5d18c1139c/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"cb35f2f6fa9bf27026f38376801e38eea
4561a9f629ff5339aa0bf5d18c1139c","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-functional-262391_kube-system_df6200335158d08f59d995a75303187b_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/df6200335158d08f59d995a75303187b/containers/kube-apiserver/18039c4b\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/df6200335158d08f59d995a75303187b/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selin
ux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-functional-262391","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"df6200335158d08f59d995a75303187b","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"df6200335158d08f59d995a75303187b","kubernetes.io/config.seen":"2023-12-26T21:58:44.256134954Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"966133bd698f243db888d7ef30b4d6b86995
1a41045f90da8aad478517af91f1","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/966133bd698f243db888d7ef30b4d6b869951a41045f90da8aad478517af91f1/userdata","rootfs":"/var/lib/containers/storage/overlay/bc3ebacb4370729bf822444c829e3715029fd1f5fad73d6a7dfc302bdb147f19/merged","created":"2023-12-26T21:59:58.269023517Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"71d8d38b","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"2","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"71d8d38b\",\"io.kubernetes.container.restartCount\":\"2\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"966133bd698f243db888d7
ef30b4d6b869951a41045f90da8aad478517af91f1","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-12-26T21:59:58.027531275Z","io.kubernetes.cri-o.Image":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.5.9-0","io.kubernetes.cri-o.ImageRef":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-functional-262391\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"2992b81af20454dbc013086a174a1ba7\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-functional-262391_2992b81af20454dbc013086a174a1ba7/etcd/2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\",\"attempt\":2}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/bc3ebacb4370729bf822444c829e3715029fd1f5fad73d6a7dfc302bdb147f19/merged","io.kubernetes.cri-o.Name":"k8s_etcd_e
tcd-functional-262391_kube-system_2992b81af20454dbc013086a174a1ba7_2","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/04fe9bb0de4db7857fe6aa33597481a9ee720ad67bece900c5b70753748a4820/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"04fe9bb0de4db7857fe6aa33597481a9ee720ad67bece900c5b70753748a4820","io.kubernetes.cri-o.SandboxName":"k8s_etcd-functional-262391_kube-system_2992b81af20454dbc013086a174a1ba7_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/2992b81af20454dbc013086a174a1ba7/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/2992b81af20454dbc013086a174a1ba7/containers/etcd/09dfedbc\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"con
tainer_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-functional-262391","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"2992b81af20454dbc013086a174a1ba7","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"2992b81af20454dbc013086a174a1ba7","kubernetes.io/config.seen":"2023-12-26T21:58:44.256146695Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"98e8aac979f46fea23543f19ff47e6bc4dcc0c65d0987b740953535e388d0092","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/98e8aac979f46fea23543f19ff47e6bc4dcc0c65d0987b740953535e388d0092/userdata","
rootfs":"/var/lib/containers/storage/overlay/e1f8e2dab20b23a1c46bd975534f7b1d734fb88e783d62b8ff4707bc6dec701d/merged","created":"2023-12-26T21:59:46.240643712Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"21c8b9cf","io.kubernetes.container.name":"storage-provisioner","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"21c8b9cf\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"98e8aac979f46fea23543f19ff47e6bc4dcc0c65d0987b740953535e388d0092","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-12-26T21:59:45.962954088Z","io.kubernetes.c
ri-o.Image":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"d0995a50-650c-4928-83fc-533af41c36fc\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_d0995a50-650c-4928-83fc-533af41c36fc/storage-provisioner/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/e1f8e2dab20b23a1c46bd975534f7b1d734fb88e783d62b8ff4707bc6dec701d/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_storage-provisioner_kube-system_d0995a50-650c-4928-83fc-533af41c36fc_1","io.kubernetes.cri-o.Res
olvPath":"/run/containers/storage/overlay-containers/995e88ed74fef62deb6947fa0d06332dc498407081e6e8b35c83661d3d6e0dc8/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"995e88ed74fef62deb6947fa0d06332dc498407081e6e8b35c83661d3d6e0dc8","io.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_d0995a50-650c-4928-83fc-533af41c36fc_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/d0995a50-650c-4928-83fc-533af41c36fc/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/d0995a50-650c-4928-83fc-533af41c36fc/containers/storage-provisioner/adaffa4b\",\"readonly\":false,\"pro
pagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/d0995a50-650c-4928-83fc-533af41c36fc/volumes/kubernetes.io~projected/kube-api-access-475zx\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"d0995a50-650c-4928-83fc-533af41c36fc","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mount
Path\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2023-12-26T21:59:36.036905965Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a34d8d5ed9ea4f1f416f5d388b6188b40b9af4f3a7333580289b2408beeb1b77","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/a34d8d5ed9ea4f1f416f5d388b6188b40b9af4f3a7333580289b2408beeb1b77/userdata","rootfs":"/var/lib/containers/storage/overlay/c4f7506b2222c1ce1740a835fdc2479f246c9d5eb99d93109bed4afc955875f7/merged","created":"2023-12-26T21:59:58.331142765Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"21c8b9cf","io.kubernetes.container.name":"storage-provisioner","io.kubernetes.container.restartCount":"2","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMess
agePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"21c8b9cf\",\"io.kubernetes.container.restartCount\":\"2\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"a34d8d5ed9ea4f1f416f5d388b6188b40b9af4f3a7333580289b2408beeb1b77","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-12-26T21:59:58.059291211Z","io.kubernetes.cri-o.Image":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.
pod.uid\":\"d0995a50-650c-4928-83fc-533af41c36fc\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_d0995a50-650c-4928-83fc-533af41c36fc/storage-provisioner/2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\",\"attempt\":2}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c4f7506b2222c1ce1740a835fdc2479f246c9d5eb99d93109bed4afc955875f7/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_storage-provisioner_kube-system_d0995a50-650c-4928-83fc-533af41c36fc_2","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/995e88ed74fef62deb6947fa0d06332dc498407081e6e8b35c83661d3d6e0dc8/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"995e88ed74fef62deb6947fa0d06332dc498407081e6e8b35c83661d3d6e0dc8","io.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_d0995a50-650c-4928-83fc-533af41c36fc_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinO
nce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/d0995a50-650c-4928-83fc-533af41c36fc/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/d0995a50-650c-4928-83fc-533af41c36fc/containers/storage-provisioner/419a5a4c\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/d0995a50-650c-4928-83fc-533af41c36fc/volumes/kubernetes.io~projected/kube-api-access-475zx\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernet
es.pod.uid":"d0995a50-650c-4928-83fc-533af41c36fc","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2023-12-26T21:59:36.036905965Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b2c24d01056cce186e9a79409051bea69c46dee8a050dbc29367a57fd6eae353","pid":0,"status":"stopped","bundle":"/run/cont
ainers/storage/overlay-containers/b2c24d01056cce186e9a79409051bea69c46dee8a050dbc29367a57fd6eae353/userdata","rootfs":"/var/lib/containers/storage/overlay/cf500b598921e0475d916120485c64ab429394eac00c42ab7dfe3861829130d6/merged","created":"2023-12-26T21:59:46.20206831Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"171df1b7","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"171df1b7\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"b2c24d01056cce186e9a79409051bea69c46dee8a050dbc29367a57fd6eae353","io.kubernetes.cri-o.Conta
inerType":"container","io.kubernetes.cri-o.Created":"2023-12-26T21:59:46.01393518Z","io.kubernetes.cri-o.Image":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20230809-80a64d96","io.kubernetes.cri-o.ImageRef":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-cbhzk\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"3d9633f7-0f86-48fa-a658-29ed4c3dc8b9\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-cbhzk_3d9633f7-0f86-48fa-a658-29ed4c3dc8b9/kindnet-cni/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/cf500b598921e0475d916120485c64ab429394eac00c42ab7dfe3861829130d6/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-cbhzk_kube-system_3d9633f7-0f
86-48fa-a658-29ed4c3dc8b9_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/d60ff4eb1f096d6d9c9798e18fcd6ea0f7621bfd4c074fe40f3e5b10c9766013/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"d60ff4eb1f096d6d9c9798e18fcd6ea0f7621bfd4c074fe40f3e5b10c9766013","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-cbhzk_kube-system_3d9633f7-0f86-48fa-a658-29ed4c3dc8b9_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/3d9633f7-0f86-48fa-a658-29ed4c3dc8b9/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux
_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/3d9633f7-0f86-48fa-a658-29ed4c3dc8b9/containers/kindnet-cni/0836b898\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/3d9633f7-0f86-48fa-a658-29ed4c3dc8b9/volumes/kubernetes.io~projected/kube-api-access-kt5bv\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kindnet-cbhzk","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"3d9633f7-0f86-48fa-a658-29ed4c3dc8b9","kubernetes.io/config.seen":"2023-12-26T21:59:04.930528596Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c822d55e3c7244a97ad01609df5b78c3fce79bd4eec6
4ad507404b68998a4d17","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/c822d55e3c7244a97ad01609df5b78c3fce79bd4eec64ad507404b68998a4d17/userdata","rootfs":"/var/lib/containers/storage/overlay/ee3db6ae751c338538b9f35e2d0f07a11cf1b5e2c8b3019944dc06dbbb4d4b25/merged","created":"2023-12-26T22:00:17.664018469Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"62225a40","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"2","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"62225a40\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"conta
inerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"2\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"c822d55e3c7244a97ad01609df5b78c3fce79bd4eec64ad507404b68998a4d17","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-12-26T22:00:17.6292816Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","io.kubernetes.cri-o.ImageName":"registry.k8s.io/coredns/coredns:v1.10.1","io.kubernetes.cri-o.ImageRef":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.
container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-5dd5756b68-rvzcn\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"d1d1dee1-9964-4293-b138-a8cba4d4a1b9\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-5dd5756b68-rvzcn_d1d1dee1-9964-4293-b138-a8cba4d4a1b9/coredns/2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\",\"attempt\":2}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/ee3db6ae751c338538b9f35e2d0f07a11cf1b5e2c8b3019944dc06dbbb4d4b25/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-5dd5756b68-rvzcn_kube-system_d1d1dee1-9964-4293-b138-a8cba4d4a1b9_2","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/fef827a05791974659b0b84ebbfb174647c84037c272f8d37e49dd4e0a7f2eb5/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"fef827a05791974659b0b84ebbfb174647c84037c272f8d37e49dd4e0a7f2eb5","io.kubernetes.cri-o.SandboxName":"k8s_coredns-5dd5756b68-rvzcn_kube-system_d1d1dee1-9964-4293-
b138-a8cba4d4a1b9_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/d1d1dee1-9964-4293-b138-a8cba4d4a1b9/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/d1d1dee1-9964-4293-b138-a8cba4d4a1b9/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/d1d1dee1-9964-4293-b138-a8cba4d4a1b9/containers/coredns/ad1e97bc\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/d1d1dee1-9964-4293-b138-a8cba4d4a1b9/volumes/kubernetes.io~projected/kube-api-access-k
49lp\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-5dd5756b68-rvzcn","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"d1d1dee1-9964-4293-b138-a8cba4d4a1b9","kubernetes.io/config.seen":"2023-12-26T21:59:36.030593117Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d8f430ff9b1cde2991cdb177244c7ccd14d87e4017a42ed0df1ec6b53cbd6242","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/d8f430ff9b1cde2991cdb177244c7ccd14d87e4017a42ed0df1ec6b53cbd6242/userdata","rootfs":"/var/lib/containers/storage/overlay/f7eff73e1621ed4ec9ee4a34cb55de4c67460572b1141bd723a7d3bce211b96d/merged","created":"2023-12-26T21:59:58.070115242Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"cc0ac28c","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"2","io.kubernetes.container.terminationMessa
gePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"cc0ac28c\",\"io.kubernetes.container.restartCount\":\"2\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"d8f430ff9b1cde2991cdb177244c7ccd14d87e4017a42ed0df1ec6b53cbd6242","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-12-26T21:59:57.98959868Z","io.kubernetes.cri-o.Image":"3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-proxy:v1.28.4","io.kubernetes.cri-o.ImageRef":"3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-9m7p9\",\"io.kubernete
s.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"c8184d93-3481-4aa6-bbda-6f73ecb4ee2e\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-9m7p9_c8184d93-3481-4aa6-bbda-6f73ecb4ee2e/kube-proxy/2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\",\"attempt\":2}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/f7eff73e1621ed4ec9ee4a34cb55de4c67460572b1141bd723a7d3bce211b96d/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-9m7p9_kube-system_c8184d93-3481-4aa6-bbda-6f73ecb4ee2e_2","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/ced93628785c08737c21e870f51dfccb3a72bf75cbb625a905b814cdb39d300c/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"ced93628785c08737c21e870f51dfccb3a72bf75cbb625a905b814cdb39d300c","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-9m7p9_kube-system_c8184d93-3481-4aa6-bbda-6f73ecb4ee2e_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernete
s.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/c8184d93-3481-4aa6-bbda-6f73ecb4ee2e/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/c8184d93-3481-4aa6-bbda-6f73ecb4ee2e/containers/kube-proxy/4972998b\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/c8184d93-3481-4aa6-bbda-6f73ecb4ee2e/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/r
un/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/c8184d93-3481-4aa6-bbda-6f73ecb4ee2e/volumes/kubernetes.io~projected/kube-api-access-hqstw\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-proxy-9m7p9","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"c8184d93-3481-4aa6-bbda-6f73ecb4ee2e","kubernetes.io/config.seen":"2023-12-26T21:59:04.932604429Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e26ee425462b716831b83e20c569f4e514e2fa4c6413ef4056d6e78bd6facc52","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/e26ee425462b716831b83e20c569f4e514e2fa4c6413ef4056d6e78bd6facc52/userdata","rootfs":"/var/lib/containers/storage/overlay/fc79cf5acf1d3d19667c697227173f3a05c724ff5e552dd3600682d86eb5fdd6/merged","created":"2023-12-26T21:59:58.219071333Z","annotations":{"io.container.manager":"cri-o","io.kubernet
es.container.hash":"171df1b7","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"2","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"171df1b7\",\"io.kubernetes.container.restartCount\":\"2\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"e26ee425462b716831b83e20c569f4e514e2fa4c6413ef4056d6e78bd6facc52","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-12-26T21:59:58.150791906Z","io.kubernetes.cri-o.Image":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20230809-80a64d96","io.kubernetes.cri-o.ImageRef":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31
db5a31b673aa1cf7a4b3af4add26","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-cbhzk\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"3d9633f7-0f86-48fa-a658-29ed4c3dc8b9\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-cbhzk_3d9633f7-0f86-48fa-a658-29ed4c3dc8b9/kindnet-cni/2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\",\"attempt\":2}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/fc79cf5acf1d3d19667c697227173f3a05c724ff5e552dd3600682d86eb5fdd6/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-cbhzk_kube-system_3d9633f7-0f86-48fa-a658-29ed4c3dc8b9_2","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/d60ff4eb1f096d6d9c9798e18fcd6ea0f7621bfd4c074fe40f3e5b10c9766013/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"d60ff4eb1f096d6d9c9798e18fcd6ea0f7621bfd4c074fe40f3e5b10c9766013","io.kubernetes.cri-o.SandboxName":"
k8s_kindnet-cbhzk_kube-system_3d9633f7-0f86-48fa-a658-29ed4c3dc8b9_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/3d9633f7-0f86-48fa-a658-29ed4c3dc8b9/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/3d9633f7-0f86-48fa-a658-29ed4c3dc8b9/containers/kindnet-cni/010f5a85\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false,
\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/3d9633f7-0f86-48fa-a658-29ed4c3dc8b9/volumes/kubernetes.io~projected/kube-api-access-kt5bv\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kindnet-cbhzk","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"3d9633f7-0f86-48fa-a658-29ed4c3dc8b9","kubernetes.io/config.seen":"2023-12-26T21:59:04.930528596Z","kubernetes.io/config.source":"api"},"owner":"root"}]
	I1226 22:00:44.200300  722547 cri.go:126] list returned 13 containers
	I1226 22:00:44.200309  722547 cri.go:129] container: {ID:0d6b996118725bf2ffc61c626851a3807df1a7c2c242876e22173b8ef119c623 Status:stopped}
	I1226 22:00:44.200322  722547 cri.go:135] skipping {0d6b996118725bf2ffc61c626851a3807df1a7c2c242876e22173b8ef119c623 stopped}: state = "stopped", want "paused"
	I1226 22:00:44.200331  722547 cri.go:129] container: {ID:15834f50e1c0634720103dd691d7f396f3d713be5acb1c38130bd0723fa9d1a8 Status:stopped}
	I1226 22:00:44.200337  722547 cri.go:135] skipping {15834f50e1c0634720103dd691d7f396f3d713be5acb1c38130bd0723fa9d1a8 stopped}: state = "stopped", want "paused"
	I1226 22:00:44.200342  722547 cri.go:129] container: {ID:1ef2a1a3e47ebe5b2a04ac354c6ce5c027694d64a2536f75cc3431a8e1535ae8 Status:stopped}
	I1226 22:00:44.200356  722547 cri.go:135] skipping {1ef2a1a3e47ebe5b2a04ac354c6ce5c027694d64a2536f75cc3431a8e1535ae8 stopped}: state = "stopped", want "paused"
	I1226 22:00:44.200362  722547 cri.go:129] container: {ID:545494fc13bba801136324a819a23116e43adc0b25ddd0b728dbff63726bda54 Status:stopped}
	I1226 22:00:44.200367  722547 cri.go:135] skipping {545494fc13bba801136324a819a23116e43adc0b25ddd0b728dbff63726bda54 stopped}: state = "stopped", want "paused"
	I1226 22:00:44.200372  722547 cri.go:129] container: {ID:6207fb81b48065f5af35d56fd8b3167db0ada159d9f4222fae7353f737384f6a Status:stopped}
	I1226 22:00:44.200377  722547 cri.go:135] skipping {6207fb81b48065f5af35d56fd8b3167db0ada159d9f4222fae7353f737384f6a stopped}: state = "stopped", want "paused"
	I1226 22:00:44.200382  722547 cri.go:129] container: {ID:8c61210b5431b6b508e2020ed0dd8a0c96b01bd1f23453dbb7c3388e4cec5be9 Status:stopped}
	I1226 22:00:44.200387  722547 cri.go:135] skipping {8c61210b5431b6b508e2020ed0dd8a0c96b01bd1f23453dbb7c3388e4cec5be9 stopped}: state = "stopped", want "paused"
	I1226 22:00:44.200393  722547 cri.go:129] container: {ID:966133bd698f243db888d7ef30b4d6b869951a41045f90da8aad478517af91f1 Status:stopped}
	I1226 22:00:44.200398  722547 cri.go:135] skipping {966133bd698f243db888d7ef30b4d6b869951a41045f90da8aad478517af91f1 stopped}: state = "stopped", want "paused"
	I1226 22:00:44.200403  722547 cri.go:129] container: {ID:98e8aac979f46fea23543f19ff47e6bc4dcc0c65d0987b740953535e388d0092 Status:stopped}
	I1226 22:00:44.200409  722547 cri.go:135] skipping {98e8aac979f46fea23543f19ff47e6bc4dcc0c65d0987b740953535e388d0092 stopped}: state = "stopped", want "paused"
	I1226 22:00:44.200414  722547 cri.go:129] container: {ID:a34d8d5ed9ea4f1f416f5d388b6188b40b9af4f3a7333580289b2408beeb1b77 Status:stopped}
	I1226 22:00:44.200419  722547 cri.go:135] skipping {a34d8d5ed9ea4f1f416f5d388b6188b40b9af4f3a7333580289b2408beeb1b77 stopped}: state = "stopped", want "paused"
	I1226 22:00:44.200424  722547 cri.go:129] container: {ID:b2c24d01056cce186e9a79409051bea69c46dee8a050dbc29367a57fd6eae353 Status:stopped}
	I1226 22:00:44.200429  722547 cri.go:135] skipping {b2c24d01056cce186e9a79409051bea69c46dee8a050dbc29367a57fd6eae353 stopped}: state = "stopped", want "paused"
	I1226 22:00:44.200434  722547 cri.go:129] container: {ID:c822d55e3c7244a97ad01609df5b78c3fce79bd4eec64ad507404b68998a4d17 Status:stopped}
	I1226 22:00:44.200440  722547 cri.go:135] skipping {c822d55e3c7244a97ad01609df5b78c3fce79bd4eec64ad507404b68998a4d17 stopped}: state = "stopped", want "paused"
	I1226 22:00:44.200444  722547 cri.go:129] container: {ID:d8f430ff9b1cde2991cdb177244c7ccd14d87e4017a42ed0df1ec6b53cbd6242 Status:stopped}
	I1226 22:00:44.200451  722547 cri.go:135] skipping {d8f430ff9b1cde2991cdb177244c7ccd14d87e4017a42ed0df1ec6b53cbd6242 stopped}: state = "stopped", want "paused"
	I1226 22:00:44.200456  722547 cri.go:129] container: {ID:e26ee425462b716831b83e20c569f4e514e2fa4c6413ef4056d6e78bd6facc52 Status:stopped}
	I1226 22:00:44.200461  722547 cri.go:135] skipping {e26ee425462b716831b83e20c569f4e514e2fa4c6413ef4056d6e78bd6facc52 stopped}: state = "stopped", want "paused"
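
Annotator's note: the cri.go:129/135 lines above show minikube enumerating the 13 listed containers and skipping each one, because at this point it is looking for *paused* containers before attempting the cluster restart; everything is "stopped", so nothing matches. A sketch of that filter under simplified types (this is not minikube's actual cri package):

    package main

    import "fmt"

    // Container is a simplified stand-in for the {ID Status} pairs logged above.
    type Container struct {
        ID     string
        Status string
    }

    // filterByState keeps only containers whose status matches want,
    // mirroring the skip messages in the log ("state = stopped, want paused").
    func filterByState(cs []Container, want string) []Container {
        var out []Container
        for _, c := range cs {
            if c.Status != want {
                fmt.Printf("skipping %v: state = %q, want %q\n", c, c.Status, want)
                continue
            }
            out = append(out, c)
        }
        return out
    }

    func main() {
        cs := []Container{{ID: "0d6b9961...", Status: "stopped"}}
        fmt.Println(filterByState(cs, "paused")) // empty: nothing is paused
    }
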
	I1226 22:00:44.200510  722547 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1226 22:00:44.216652  722547 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1226 22:00:44.216663  722547 kubeadm.go:636] restartCluster start
	I1226 22:00:44.216716  722547 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1226 22:00:44.230132  722547 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1226 22:00:44.230674  722547 kubeconfig.go:92] found "functional-262391" server: "https://192.168.49.2:8441"
	I1226 22:00:44.232128  722547 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1226 22:00:44.245508  722547 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2023-12-26 21:58:35.021070333 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2023-12-26 22:00:43.542120555 +0000
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
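
Annotator's note: the "needs reconfigure: configs differ" decision above comes from running `diff -u` on the old and new kubeadm.yaml and treating a non-empty diff as drift; here only `enable-admission-plugins` changed. `diff` exits 0 when the files match, 1 when they differ, and 2 or more on a real error, so the check reduces to inspecting the exit status. A hedged sketch of that pattern (paths assumed from the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // configsDiffer runs diff -u and maps its exit status:
    // 0 = identical, 1 = differ, anything else = real error.
    func configsDiffer(oldPath, newPath string) (bool, error) {
        out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
        if err == nil {
            return false, nil
        }
        if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
            fmt.Printf("configs differ:\n%s", out)
            return true, nil
        }
        return false, err
    }

    func main() {
        differ, err := configsDiffer("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        fmt.Println(differ, err)
    }
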
	I1226 22:00:44.245517  722547 kubeadm.go:1135] stopping kube-system containers ...
	I1226 22:00:44.245527  722547 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1226 22:00:44.245596  722547 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1226 22:00:44.306649  722547 cri.go:89] found id: "c822d55e3c7244a97ad01609df5b78c3fce79bd4eec64ad507404b68998a4d17"
	I1226 22:00:44.306661  722547 cri.go:89] found id: "e26ee425462b716831b83e20c569f4e514e2fa4c6413ef4056d6e78bd6facc52"
	I1226 22:00:44.306666  722547 cri.go:89] found id: "8c61210b5431b6b508e2020ed0dd8a0c96b01bd1f23453dbb7c3388e4cec5be9"
	I1226 22:00:44.306670  722547 cri.go:89] found id: "1ef2a1a3e47ebe5b2a04ac354c6ce5c027694d64a2536f75cc3431a8e1535ae8"
	I1226 22:00:44.306680  722547 cri.go:89] found id: "a34d8d5ed9ea4f1f416f5d388b6188b40b9af4f3a7333580289b2408beeb1b77"
	I1226 22:00:44.306685  722547 cri.go:89] found id: "966133bd698f243db888d7ef30b4d6b869951a41045f90da8aad478517af91f1"
	I1226 22:00:44.306688  722547 cri.go:89] found id: "d8f430ff9b1cde2991cdb177244c7ccd14d87e4017a42ed0df1ec6b53cbd6242"
	I1226 22:00:44.306691  722547 cri.go:89] found id: "0d6b996118725bf2ffc61c626851a3807df1a7c2c242876e22173b8ef119c623"
	I1226 22:00:44.306694  722547 cri.go:89] found id: ""
	I1226 22:00:44.306698  722547 cri.go:234] Stopping containers: [c822d55e3c7244a97ad01609df5b78c3fce79bd4eec64ad507404b68998a4d17 e26ee425462b716831b83e20c569f4e514e2fa4c6413ef4056d6e78bd6facc52 8c61210b5431b6b508e2020ed0dd8a0c96b01bd1f23453dbb7c3388e4cec5be9 1ef2a1a3e47ebe5b2a04ac354c6ce5c027694d64a2536f75cc3431a8e1535ae8 a34d8d5ed9ea4f1f416f5d388b6188b40b9af4f3a7333580289b2408beeb1b77 966133bd698f243db888d7ef30b4d6b869951a41045f90da8aad478517af91f1 d8f430ff9b1cde2991cdb177244c7ccd14d87e4017a42ed0df1ec6b53cbd6242 0d6b996118725bf2ffc61c626851a3807df1a7c2c242876e22173b8ef119c623]
	I1226 22:00:44.306754  722547 ssh_runner.go:195] Run: which crictl
	I1226 22:00:44.311474  722547 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 c822d55e3c7244a97ad01609df5b78c3fce79bd4eec64ad507404b68998a4d17 e26ee425462b716831b83e20c569f4e514e2fa4c6413ef4056d6e78bd6facc52 8c61210b5431b6b508e2020ed0dd8a0c96b01bd1f23453dbb7c3388e4cec5be9 1ef2a1a3e47ebe5b2a04ac354c6ce5c027694d64a2536f75cc3431a8e1535ae8 a34d8d5ed9ea4f1f416f5d388b6188b40b9af4f3a7333580289b2408beeb1b77 966133bd698f243db888d7ef30b4d6b869951a41045f90da8aad478517af91f1 d8f430ff9b1cde2991cdb177244c7ccd14d87e4017a42ed0df1ec6b53cbd6242 0d6b996118725bf2ffc61c626851a3807df1a7c2c242876e22173b8ef119c623
	I1226 22:00:44.380046  722547 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1226 22:00:44.484101  722547 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1226 22:00:44.495336  722547 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Dec 26 21:58 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Dec 26 21:58 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Dec 26 21:58 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Dec 26 21:58 /etc/kubernetes/scheduler.conf
	
	I1226 22:00:44.495406  722547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1226 22:00:44.506561  722547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1226 22:00:44.517963  722547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1226 22:00:44.528626  722547 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1226 22:00:44.528687  722547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1226 22:00:44.539176  722547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1226 22:00:44.549681  722547 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1226 22:00:44.549743  722547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1226 22:00:44.560074  722547 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1226 22:00:44.570670  722547 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
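
Annotator's note: in the sequence above each component kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint; files that miss it (controller-manager.conf and scheduler.conf here, grep exiting with status 1) are removed so the following `kubeadm init phase kubeconfig` can regenerate them. A sketch of the same check in plain Go instead of grep, with the endpoint taken from the log:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // pruneStaleKubeconfig removes path when it does not reference endpoint,
    // so a later "kubeadm init phase kubeconfig" can rewrite it.
    func pruneStaleKubeconfig(path, endpoint string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        if strings.Contains(string(data), endpoint) {
            return nil // up to date, keep it
        }
        fmt.Printf("%s missing %s, removing\n", path, endpoint)
        return os.Remove(path)
    }

    func main() {
        const ep = "https://control-plane.minikube.internal:8441"
        for _, f := range []string{
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            _ = pruneStaleKubeconfig(f, ep)
        }
    }
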
	I1226 22:00:44.570684  722547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1226 22:00:44.633408  722547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1226 22:00:46.789441  722547 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.156007448s)
	I1226 22:00:46.789460  722547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1226 22:00:46.994625  722547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1226 22:00:47.063467  722547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1226 22:00:47.137462  722547 api_server.go:52] waiting for apiserver process to appear ...
	I1226 22:00:47.137531  722547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1226 22:00:47.638531  722547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1226 22:00:48.138257  722547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1226 22:00:48.165045  722547 api_server.go:72] duration metric: took 1.027582148s to wait for apiserver process to appear ...
	I1226 22:00:48.165060  722547 api_server.go:88] waiting for apiserver healthz status ...
	I1226 22:00:48.165078  722547 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1226 22:00:48.165327  722547 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I1226 22:00:48.665630  722547 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1226 22:00:51.757992  722547 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1226 22:00:51.758009  722547 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1226 22:00:51.758027  722547 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1226 22:00:51.910545  722547 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1226 22:00:51.910578  722547 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1226 22:00:52.165783  722547 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1226 22:00:52.176799  722547 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1226 22:00:52.176822  722547 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1226 22:00:52.665393  722547 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1226 22:00:52.681424  722547 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1226 22:00:52.681446  722547 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1226 22:00:53.165153  722547 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1226 22:00:53.174003  722547 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
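
Annotator's note: the healthz probes above trace apiserver startup in order: connection refused while the process boots, 403 while anonymous access to /healthz is still forbidden, 500 while the rbac/bootstrap-roles and scheduling post-start hooks finish, and finally 200. A minimal poller for the same endpoint, assuming certificate verification is skipped as minikube does for this self-signed endpoint:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitHealthz polls url until it returns 200 or the deadline passes.
    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // The apiserver serves a self-signed cert on 192.168.49.2:8441.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // healthz returned 200: ok
                }
                fmt.Println("not ready yet:", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver never became healthy")
    }

    func main() {
        fmt.Println(waitHealthz("https://192.168.49.2:8441/healthz", time.Minute))
    }
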
	I1226 22:00:53.189570  722547 api_server.go:141] control plane version: v1.28.4
	I1226 22:00:53.189588  722547 api_server.go:131] duration metric: took 5.024522659s to wait for apiserver health ...
	I1226 22:00:53.189596  722547 cni.go:84] Creating CNI manager for ""
	I1226 22:00:53.189605  722547 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1226 22:00:53.191460  722547 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1226 22:00:53.193250  722547 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1226 22:00:53.198343  722547 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1226 22:00:53.198363  722547 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1226 22:00:53.220550  722547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1226 22:00:54.014694  722547 system_pods.go:43] waiting for kube-system pods to appear ...
	I1226 22:00:54.027888  722547 system_pods.go:59] 8 kube-system pods found
	I1226 22:00:54.027913  722547 system_pods.go:61] "coredns-5dd5756b68-rvzcn" [d1d1dee1-9964-4293-b138-a8cba4d4a1b9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1226 22:00:54.027921  722547 system_pods.go:61] "etcd-functional-262391" [e9912570-29e4-428e-9bcf-4c0441887af0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1226 22:00:54.027926  722547 system_pods.go:61] "kindnet-cbhzk" [3d9633f7-0f86-48fa-a658-29ed4c3dc8b9] Running
	I1226 22:00:54.027934  722547 system_pods.go:61] "kube-apiserver-functional-262391" [83b75db3-64e3-4cce-b172-f13888f3b83d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1226 22:00:54.027944  722547 system_pods.go:61] "kube-controller-manager-functional-262391" [2876bbc5-3381-4505-b57e-0ab5ed52475b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1226 22:00:54.027949  722547 system_pods.go:61] "kube-proxy-9m7p9" [c8184d93-3481-4aa6-bbda-6f73ecb4ee2e] Running
	I1226 22:00:54.027957  722547 system_pods.go:61] "kube-scheduler-functional-262391" [5c50dfd3-64fd-4086-8283-71fb371ac3e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1226 22:00:54.027961  722547 system_pods.go:61] "storage-provisioner" [d0995a50-650c-4928-83fc-533af41c36fc] Running
	I1226 22:00:54.027967  722547 system_pods.go:74] duration metric: took 13.262153ms to wait for pod list to return data ...
	I1226 22:00:54.027975  722547 node_conditions.go:102] verifying NodePressure condition ...
	I1226 22:00:54.034687  722547 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1226 22:00:54.034706  722547 node_conditions.go:123] node cpu capacity is 2
	I1226 22:00:54.034717  722547 node_conditions.go:105] duration metric: took 6.737404ms to run NodePressure ...
	I1226 22:00:54.034734  722547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1226 22:00:54.251601  722547 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1226 22:00:54.257609  722547 kubeadm.go:787] kubelet initialised
	I1226 22:00:54.257619  722547 kubeadm.go:788] duration metric: took 6.004769ms waiting for restarted kubelet to initialise ...
	I1226 22:00:54.257626  722547 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1226 22:00:54.266658  722547 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-rvzcn" in "kube-system" namespace to be "Ready" ...
	I1226 22:00:55.773886  722547 pod_ready.go:92] pod "coredns-5dd5756b68-rvzcn" in "kube-system" namespace has status "Ready":"True"
	I1226 22:00:55.773899  722547 pod_ready.go:81] duration metric: took 1.507226832s waiting for pod "coredns-5dd5756b68-rvzcn" in "kube-system" namespace to be "Ready" ...
	I1226 22:00:55.773909  722547 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-262391" in "kube-system" namespace to be "Ready" ...
	I1226 22:00:57.780878  722547 pod_ready.go:102] pod "etcd-functional-262391" in "kube-system" namespace has status "Ready":"False"
	I1226 22:00:59.781146  722547 pod_ready.go:102] pod "etcd-functional-262391" in "kube-system" namespace has status "Ready":"False"
	I1226 22:01:00.781170  722547 pod_ready.go:92] pod "etcd-functional-262391" in "kube-system" namespace has status "Ready":"True"
	I1226 22:01:00.781182  722547 pod_ready.go:81] duration metric: took 5.007266592s waiting for pod "etcd-functional-262391" in "kube-system" namespace to be "Ready" ...
	I1226 22:01:00.781203  722547 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-262391" in "kube-system" namespace to be "Ready" ...
	I1226 22:01:00.788066  722547 pod_ready.go:92] pod "kube-apiserver-functional-262391" in "kube-system" namespace has status "Ready":"True"
	I1226 22:01:00.788085  722547 pod_ready.go:81] duration metric: took 6.869176ms waiting for pod "kube-apiserver-functional-262391" in "kube-system" namespace to be "Ready" ...
	I1226 22:01:00.788095  722547 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-262391" in "kube-system" namespace to be "Ready" ...
	I1226 22:01:00.795473  722547 pod_ready.go:92] pod "kube-controller-manager-functional-262391" in "kube-system" namespace has status "Ready":"True"
	I1226 22:01:00.795485  722547 pod_ready.go:81] duration metric: took 7.383551ms waiting for pod "kube-controller-manager-functional-262391" in "kube-system" namespace to be "Ready" ...
	I1226 22:01:00.795495  722547 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9m7p9" in "kube-system" namespace to be "Ready" ...
	I1226 22:01:00.802165  722547 pod_ready.go:92] pod "kube-proxy-9m7p9" in "kube-system" namespace has status "Ready":"True"
	I1226 22:01:00.802176  722547 pod_ready.go:81] duration metric: took 6.67563ms waiting for pod "kube-proxy-9m7p9" in "kube-system" namespace to be "Ready" ...
	I1226 22:01:00.802186  722547 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-262391" in "kube-system" namespace to be "Ready" ...
	I1226 22:01:02.809067  722547 pod_ready.go:92] pod "kube-scheduler-functional-262391" in "kube-system" namespace has status "Ready":"True"
	I1226 22:01:02.809079  722547 pod_ready.go:81] duration metric: took 2.006885302s waiting for pod "kube-scheduler-functional-262391" in "kube-system" namespace to be "Ready" ...
	I1226 22:01:02.809090  722547 pod_ready.go:38] duration metric: took 8.551455764s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
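
Annotator's note: the pod_ready.go waits above reduce to polling each system pod's status.conditions for Ready=True. A hedged client-go sketch of that check (this is illustrative, not minikube's actual helper; pod and namespace names are taken from the log):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        for {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-functional-262391", metav1.GetOptions{})
            if err == nil && podReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
    }
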
	I1226 22:01:02.809115  722547 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1226 22:01:02.818458  722547 ops.go:34] apiserver oom_adj: -16
	I1226 22:01:02.818470  722547 kubeadm.go:640] restartCluster took 18.601801805s
	I1226 22:01:02.818478  722547 kubeadm.go:406] StartCluster complete in 18.743842059s
	I1226 22:01:02.818499  722547 settings.go:142] acquiring lock: {Name:mk1b89d623875ac96830001bdd0fc2b8d8c10aec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 22:01:02.818577  722547 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17857-697646/kubeconfig
	I1226 22:01:02.819255  722547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-697646/kubeconfig: {Name:mk171fc32e21f516abb68bc5ebeb628b3c1d7f0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 22:01:02.819464  722547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1226 22:01:02.819735  722547 config.go:182] Loaded profile config "functional-262391": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1226 22:01:02.819883  722547 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I1226 22:01:02.819941  722547 addons.go:69] Setting storage-provisioner=true in profile "functional-262391"
	I1226 22:01:02.819950  722547 addons.go:237] Setting addon storage-provisioner=true in "functional-262391"
	W1226 22:01:02.819955  722547 addons.go:246] addon storage-provisioner should already be in state true
	I1226 22:01:02.819996  722547 host.go:66] Checking if "functional-262391" exists ...
	I1226 22:01:02.820326  722547 addons.go:69] Setting default-storageclass=true in profile "functional-262391"
	I1226 22:01:02.820338  722547 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-262391"
	I1226 22:01:02.820429  722547 cli_runner.go:164] Run: docker container inspect functional-262391 --format={{.State.Status}}
	I1226 22:01:02.820628  722547 cli_runner.go:164] Run: docker container inspect functional-262391 --format={{.State.Status}}
	I1226 22:01:02.837382  722547 kapi.go:248] "coredns" deployment in "kube-system" namespace and "functional-262391" context rescaled to 1 replicas
	I1226 22:01:02.837410  722547 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1226 22:01:02.839735  722547 out.go:177] * Verifying Kubernetes components...
	I1226 22:01:02.841732  722547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1226 22:01:02.851517  722547 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1226 22:01:02.855370  722547 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1226 22:01:02.855390  722547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1226 22:01:02.855487  722547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-262391
	I1226 22:01:02.884285  722547 addons.go:237] Setting addon default-storageclass=true in "functional-262391"
	W1226 22:01:02.884296  722547 addons.go:246] addon default-storageclass should already be in state true
	I1226 22:01:02.884319  722547 host.go:66] Checking if "functional-262391" exists ...
	I1226 22:01:02.886378  722547 cli_runner.go:164] Run: docker container inspect functional-262391 --format={{.State.Status}}
	I1226 22:01:02.920241  722547 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I1226 22:01:02.920253  722547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1226 22:01:02.920315  722547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-262391
	I1226 22:01:02.920643  722547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33681 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/functional-262391/id_rsa Username:docker}
	I1226 22:01:02.978390  722547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33681 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/functional-262391/id_rsa Username:docker}
	I1226 22:01:03.021163  722547 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1226 22:01:03.021203  722547 node_ready.go:35] waiting up to 6m0s for node "functional-262391" to be "Ready" ...
	I1226 22:01:03.030364  722547 node_ready.go:49] node "functional-262391" has status "Ready":"True"
	I1226 22:01:03.030376  722547 node_ready.go:38] duration metric: took 9.162104ms waiting for node "functional-262391" to be "Ready" ...
	I1226 22:01:03.030386  722547 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1226 22:01:03.043302  722547 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-rvzcn" in "kube-system" namespace to be "Ready" ...
	I1226 22:01:03.077625  722547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1226 22:01:03.117363  722547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1226 22:01:03.181064  722547 pod_ready.go:92] pod "coredns-5dd5756b68-rvzcn" in "kube-system" namespace has status "Ready":"True"
	I1226 22:01:03.181084  722547 pod_ready.go:81] duration metric: took 137.750727ms waiting for pod "coredns-5dd5756b68-rvzcn" in "kube-system" namespace to be "Ready" ...
	I1226 22:01:03.181095  722547 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-262391" in "kube-system" namespace to be "Ready" ...
	I1226 22:01:03.580360  722547 pod_ready.go:92] pod "etcd-functional-262391" in "kube-system" namespace has status "Ready":"True"
	I1226 22:01:03.580372  722547 pod_ready.go:81] duration metric: took 399.270784ms waiting for pod "etcd-functional-262391" in "kube-system" namespace to be "Ready" ...
	I1226 22:01:03.580392  722547 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-262391" in "kube-system" namespace to be "Ready" ...
	I1226 22:01:03.605616  722547 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1226 22:01:03.611170  722547 addons.go:508] enable addons completed in 791.29658ms: enabled=[storage-provisioner default-storageclass]
	I1226 22:01:03.978053  722547 pod_ready.go:92] pod "kube-apiserver-functional-262391" in "kube-system" namespace has status "Ready":"True"
	I1226 22:01:03.978064  722547 pod_ready.go:81] duration metric: took 397.66483ms waiting for pod "kube-apiserver-functional-262391" in "kube-system" namespace to be "Ready" ...
	I1226 22:01:03.978075  722547 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-262391" in "kube-system" namespace to be "Ready" ...
	I1226 22:01:04.378003  722547 pod_ready.go:92] pod "kube-controller-manager-functional-262391" in "kube-system" namespace has status "Ready":"True"
	I1226 22:01:04.378014  722547 pod_ready.go:81] duration metric: took 399.932758ms waiting for pod "kube-controller-manager-functional-262391" in "kube-system" namespace to be "Ready" ...
	I1226 22:01:04.378025  722547 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9m7p9" in "kube-system" namespace to be "Ready" ...
	I1226 22:01:04.781368  722547 pod_ready.go:92] pod "kube-proxy-9m7p9" in "kube-system" namespace has status "Ready":"True"
	I1226 22:01:04.781384  722547 pod_ready.go:81] duration metric: took 403.348901ms waiting for pod "kube-proxy-9m7p9" in "kube-system" namespace to be "Ready" ...
	I1226 22:01:04.781395  722547 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-262391" in "kube-system" namespace to be "Ready" ...
	I1226 22:01:05.179180  722547 pod_ready.go:92] pod "kube-scheduler-functional-262391" in "kube-system" namespace has status "Ready":"True"
	I1226 22:01:05.179193  722547 pod_ready.go:81] duration metric: took 397.791588ms waiting for pod "kube-scheduler-functional-262391" in "kube-system" namespace to be "Ready" ...
	I1226 22:01:05.179205  722547 pod_ready.go:38] duration metric: took 2.148811021s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1226 22:01:05.179220  722547 api_server.go:52] waiting for apiserver process to appear ...
	I1226 22:01:05.179286  722547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1226 22:01:05.194352  722547 api_server.go:72] duration metric: took 2.356914523s to wait for apiserver process to appear ...
	I1226 22:01:05.194367  722547 api_server.go:88] waiting for apiserver healthz status ...
	I1226 22:01:05.194390  722547 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1226 22:01:05.204155  722547 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1226 22:01:05.205463  722547 api_server.go:141] control plane version: v1.28.4
	I1226 22:01:05.205487  722547 api_server.go:131] duration metric: took 11.114354ms to wait for apiserver health ...
	I1226 22:01:05.205497  722547 system_pods.go:43] waiting for kube-system pods to appear ...
	I1226 22:01:05.381378  722547 system_pods.go:59] 8 kube-system pods found
	I1226 22:01:05.381393  722547 system_pods.go:61] "coredns-5dd5756b68-rvzcn" [d1d1dee1-9964-4293-b138-a8cba4d4a1b9] Running
	I1226 22:01:05.381398  722547 system_pods.go:61] "etcd-functional-262391" [e9912570-29e4-428e-9bcf-4c0441887af0] Running
	I1226 22:01:05.381402  722547 system_pods.go:61] "kindnet-cbhzk" [3d9633f7-0f86-48fa-a658-29ed4c3dc8b9] Running
	I1226 22:01:05.381407  722547 system_pods.go:61] "kube-apiserver-functional-262391" [83b75db3-64e3-4cce-b172-f13888f3b83d] Running
	I1226 22:01:05.381412  722547 system_pods.go:61] "kube-controller-manager-functional-262391" [2876bbc5-3381-4505-b57e-0ab5ed52475b] Running
	I1226 22:01:05.381416  722547 system_pods.go:61] "kube-proxy-9m7p9" [c8184d93-3481-4aa6-bbda-6f73ecb4ee2e] Running
	I1226 22:01:05.381420  722547 system_pods.go:61] "kube-scheduler-functional-262391" [5c50dfd3-64fd-4086-8283-71fb371ac3e7] Running
	I1226 22:01:05.381425  722547 system_pods.go:61] "storage-provisioner" [d0995a50-650c-4928-83fc-533af41c36fc] Running
	I1226 22:01:05.381430  722547 system_pods.go:74] duration metric: took 175.927033ms to wait for pod list to return data ...
	I1226 22:01:05.381437  722547 default_sa.go:34] waiting for default service account to be created ...
	I1226 22:01:05.577538  722547 default_sa.go:45] found service account: "default"
	I1226 22:01:05.577552  722547 default_sa.go:55] duration metric: took 196.109667ms for default service account to be created ...
	I1226 22:01:05.577559  722547 system_pods.go:116] waiting for k8s-apps to be running ...
	I1226 22:01:05.781758  722547 system_pods.go:86] 8 kube-system pods found
	I1226 22:01:05.781773  722547 system_pods.go:89] "coredns-5dd5756b68-rvzcn" [d1d1dee1-9964-4293-b138-a8cba4d4a1b9] Running
	I1226 22:01:05.781778  722547 system_pods.go:89] "etcd-functional-262391" [e9912570-29e4-428e-9bcf-4c0441887af0] Running
	I1226 22:01:05.781782  722547 system_pods.go:89] "kindnet-cbhzk" [3d9633f7-0f86-48fa-a658-29ed4c3dc8b9] Running
	I1226 22:01:05.781786  722547 system_pods.go:89] "kube-apiserver-functional-262391" [83b75db3-64e3-4cce-b172-f13888f3b83d] Running
	I1226 22:01:05.781790  722547 system_pods.go:89] "kube-controller-manager-functional-262391" [2876bbc5-3381-4505-b57e-0ab5ed52475b] Running
	I1226 22:01:05.781795  722547 system_pods.go:89] "kube-proxy-9m7p9" [c8184d93-3481-4aa6-bbda-6f73ecb4ee2e] Running
	I1226 22:01:05.781800  722547 system_pods.go:89] "kube-scheduler-functional-262391" [5c50dfd3-64fd-4086-8283-71fb371ac3e7] Running
	I1226 22:01:05.781804  722547 system_pods.go:89] "storage-provisioner" [d0995a50-650c-4928-83fc-533af41c36fc] Running
	I1226 22:01:05.781810  722547 system_pods.go:126] duration metric: took 204.245866ms to wait for k8s-apps to be running ...
	I1226 22:01:05.781818  722547 system_svc.go:44] waiting for kubelet service to be running ....
	I1226 22:01:05.781877  722547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1226 22:01:05.796049  722547 system_svc.go:56] duration metric: took 14.221704ms WaitForService to wait for kubelet.
	I1226 22:01:05.796075  722547 kubeadm.go:581] duration metric: took 2.958645384s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1226 22:01:05.796095  722547 node_conditions.go:102] verifying NodePressure condition ...
	I1226 22:01:05.977551  722547 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1226 22:01:05.977566  722547 node_conditions.go:123] node cpu capacity is 2
	I1226 22:01:05.977576  722547 node_conditions.go:105] duration metric: took 181.476453ms to run NodePressure ...
	I1226 22:01:05.977586  722547 start.go:228] waiting for startup goroutines ...
	I1226 22:01:05.977607  722547 start.go:233] waiting for cluster config update ...
	I1226 22:01:05.977616  722547 start.go:242] writing updated cluster config ...
	I1226 22:01:05.977906  722547 ssh_runner.go:195] Run: rm -f paused
	I1226 22:01:06.053519  722547 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I1226 22:01:06.055540  722547 out.go:177] * Done! kubectl is now configured to use "functional-262391" cluster and "default" namespace by default
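
Annotator's note: the closing line flags a minor-version skew of 1 between kubectl 1.29.0 and the 1.28.4 cluster, which is within kubectl's supported window of one minor version in either direction, so minikube only informs rather than warns. A small sketch of how such a skew is computed (the parsing is deliberately simplified; minikube's real check lives in its own code):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minorSkew returns the absolute difference of the minor components
    // of two "major.minor.patch" version strings.
    func minorSkew(a, b string) int {
        minor := func(v string) int {
            parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
            m, _ := strconv.Atoi(parts[1])
            return m
        }
        d := minor(a) - minor(b)
        if d < 0 {
            d = -d
        }
        return d
    }

    func main() {
        skew := minorSkew("1.29.0", "1.28.4")
        fmt.Printf("minor skew: %d (kubectl supports +/-1)\n", skew)
    }
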
	
	
	==> CRI-O <==
	Dec 26 22:01:48 functional-262391 crio[4353]: time="2023-12-26 22:01:48.362293550Z" level=info msg="Image docker.io/nginx:alpine not found" id=d05fe0cb-dea9-4c66-9b3d-2a54764fd722 name=/runtime.v1.ImageService/ImageStatus
	Dec 26 22:02:01 functional-262391 crio[4353]: time="2023-12-26 22:02:01.176845480Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=973fa3c1-8803-4cc7-b1e0-8402593d0f4e name=/runtime.v1.ImageService/ImageStatus
	Dec 26 22:02:01 functional-262391 crio[4353]: time="2023-12-26 22:02:01.177065955Z" level=info msg="Image docker.io/nginx:alpine not found" id=973fa3c1-8803-4cc7-b1e0-8402593d0f4e name=/runtime.v1.ImageService/ImageStatus
	Dec 26 22:02:18 functional-262391 crio[4353]: time="2023-12-26 22:02:18.172732214Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=746740cf-2bfb-4443-83f0-a282e2f62bcf name=/runtime.v1.ImageService/PullImage
	Dec 26 22:02:18 functional-262391 crio[4353]: time="2023-12-26 22:02:18.174822792Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Dec 26 22:02:18 functional-262391 crio[4353]: time="2023-12-26 22:02:18.421094589Z" level=info msg="Checking image status: docker.io/nginx:latest" id=8330013d-bed2-4867-b0e8-dbaedafa61e6 name=/runtime.v1.ImageService/ImageStatus
	Dec 26 22:02:18 functional-262391 crio[4353]: time="2023-12-26 22:02:18.421320151Z" level=info msg="Image docker.io/nginx:latest not found" id=8330013d-bed2-4867-b0e8-dbaedafa61e6 name=/runtime.v1.ImageService/ImageStatus
	Dec 26 22:02:32 functional-262391 crio[4353]: time="2023-12-26 22:02:32.176486694Z" level=info msg="Checking image status: docker.io/nginx:latest" id=c2b9dd5d-f74b-4742-9d2b-b63f90ba5d1a name=/runtime.v1.ImageService/ImageStatus
	Dec 26 22:02:32 functional-262391 crio[4353]: time="2023-12-26 22:02:32.176886036Z" level=info msg="Image docker.io/nginx:latest not found" id=c2b9dd5d-f74b-4742-9d2b-b63f90ba5d1a name=/runtime.v1.ImageService/ImageStatus
	Dec 26 22:03:02 functional-262391 crio[4353]: time="2023-12-26 22:03:02.719948022Z" level=info msg="Pulling image: docker.io/nginx:latest" id=bae736a3-c5d7-4dd0-b876-675855ef52ae name=/runtime.v1.ImageService/PullImage
	Dec 26 22:03:02 functional-262391 crio[4353]: time="2023-12-26 22:03:02.723187061Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Dec 26 22:03:17 functional-262391 crio[4353]: time="2023-12-26 22:03:17.177122817Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=aaf663dd-d12d-4780-9a4c-f4fba32e61c8 name=/runtime.v1.ImageService/ImageStatus
	Dec 26 22:03:17 functional-262391 crio[4353]: time="2023-12-26 22:03:17.177348042Z" level=info msg="Image docker.io/nginx:alpine not found" id=aaf663dd-d12d-4780-9a4c-f4fba32e61c8 name=/runtime.v1.ImageService/ImageStatus
	Dec 26 22:03:32 functional-262391 crio[4353]: time="2023-12-26 22:03:32.176710999Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=12a800c0-c985-46bf-86d6-272c734f19dd name=/runtime.v1.ImageService/ImageStatus
	Dec 26 22:03:32 functional-262391 crio[4353]: time="2023-12-26 22:03:32.176942337Z" level=info msg="Image docker.io/nginx:alpine not found" id=12a800c0-c985-46bf-86d6-272c734f19dd name=/runtime.v1.ImageService/ImageStatus
	Dec 26 22:03:33 functional-262391 crio[4353]: time="2023-12-26 22:03:33.024418162Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=0dd3e81c-8451-423f-929c-977f5288af50 name=/runtime.v1.ImageService/PullImage
	Dec 26 22:03:33 functional-262391 crio[4353]: time="2023-12-26 22:03:33.027015920Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Dec 26 22:03:44 functional-262391 crio[4353]: time="2023-12-26 22:03:44.177674401Z" level=info msg="Checking image status: docker.io/nginx:latest" id=9a702ccb-a735-4bee-a847-836407b6164b name=/runtime.v1.ImageService/ImageStatus
	Dec 26 22:03:44 functional-262391 crio[4353]: time="2023-12-26 22:03:44.177983005Z" level=info msg="Image docker.io/nginx:latest not found" id=9a702ccb-a735-4bee-a847-836407b6164b name=/runtime.v1.ImageService/ImageStatus
	Dec 26 22:03:56 functional-262391 crio[4353]: time="2023-12-26 22:03:56.176891624Z" level=info msg="Checking image status: docker.io/nginx:latest" id=3538ada2-7ae8-4a7a-8e7e-bbc9f55ebcf2 name=/runtime.v1.ImageService/ImageStatus
	Dec 26 22:03:56 functional-262391 crio[4353]: time="2023-12-26 22:03:56.177113814Z" level=info msg="Image docker.io/nginx:latest not found" id=3538ada2-7ae8-4a7a-8e7e-bbc9f55ebcf2 name=/runtime.v1.ImageService/ImageStatus
	Dec 26 22:04:03 functional-262391 crio[4353]: time="2023-12-26 22:04:03.302802347Z" level=info msg="Pulling image: docker.io/nginx:latest" id=b8e8a73e-96f3-4f21-8b79-4c167696e7a5 name=/runtime.v1.ImageService/PullImage
	Dec 26 22:04:03 functional-262391 crio[4353]: time="2023-12-26 22:04:03.304923627Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Dec 26 22:04:18 functional-262391 crio[4353]: time="2023-12-26 22:04:18.176782389Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=863190fc-638c-4d2a-8593-e84c22c5b6bb name=/runtime.v1.ImageService/ImageStatus
	Dec 26 22:04:18 functional-262391 crio[4353]: time="2023-12-26 22:04:18.177022531Z" level=info msg="Image docker.io/nginx:alpine not found" id=863190fc-638c-4d2a-8593-e84c22c5b6bb name=/runtime.v1.ImageService/ImageStatus
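
Annotator's note: this CRI-O log shows the pull loop behind an ImagePullBackOff: kubelet repeatedly asks ImageStatus, finds docker.io/nginx:alpine and :latest absent, retriggers PullImage, and spaces the attempts out with an exponential backoff (kubelet's default image backoff starts around 10s and doubles up to a 5-minute cap). A sketch of that capped doubling, with the pull itself stubbed out and the demo using short durations instead of kubelet's real 10s/5m values:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // pullWithBackoff retries pull with a doubling delay, capped at max,
    // the same shape as kubelet's image pull backoff.
    func pullWithBackoff(pull func() error, initial, max time.Duration) {
        delay := initial
        for {
            if err := pull(); err == nil {
                return
            }
            fmt.Println("pull failed, backing off", delay)
            time.Sleep(delay)
            if delay *= 2; delay > max {
                delay = max
            }
        }
    }

    func main() {
        attempts := 0
        // Demo values; kubelet uses roughly 10s initial, 5m cap.
        pullWithBackoff(func() error {
            attempts++
            if attempts < 3 {
                return errors.New("docker.io/nginx:alpine not found")
            }
            return nil
        }, 100*time.Millisecond, time.Second)
    }
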
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d2c7b8ff0a10f       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39   3 minutes ago       Running             kube-proxy                3                   ced93628785c0       kube-proxy-9m7p9
	f30c2ed26c999       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   3 minutes ago       Running             coredns                   3                   fef827a057919       coredns-5dd5756b68-rvzcn
	312553732c0c5       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   3 minutes ago       Running             storage-provisioner       3                   995e88ed74fef       storage-provisioner
	1df6c7623610b       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26   3 minutes ago       Running             kindnet-cni               3                   d60ff4eb1f096       kindnet-cbhzk
	bb588759e5092       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419   3 minutes ago       Running             kube-apiserver            0                   9b95fb9e6a9db       kube-apiserver-functional-262391
	4dcd4a3ad372e       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54   3 minutes ago       Running             kube-scheduler            3                   366e6735458fe       kube-scheduler-functional-262391
	07811118ec726       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace   3 minutes ago       Running             etcd                      3                   04fe9bb0de4db       etcd-functional-262391
	e1ece719ec1a1       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b   3 minutes ago       Running             kube-controller-manager   3                   ac3b0cf8e4b5b       kube-controller-manager-functional-262391
	c822d55e3c724       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   4 minutes ago       Exited              coredns                   2                   fef827a057919       coredns-5dd5756b68-rvzcn
	e26ee425462b7       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26   4 minutes ago       Exited              kindnet-cni               2                   d60ff4eb1f096       kindnet-cbhzk
	1ef2a1a3e47eb       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b   4 minutes ago       Exited              kube-controller-manager   2                   ac3b0cf8e4b5b       kube-controller-manager-functional-262391
	a34d8d5ed9ea4       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   4 minutes ago       Exited              storage-provisioner       2                   995e88ed74fef       storage-provisioner
	966133bd698f2       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace   4 minutes ago       Exited              etcd                      2                   04fe9bb0de4db       etcd-functional-262391
	d8f430ff9b1cd       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39   4 minutes ago       Exited              kube-proxy                2                   ced93628785c0       kube-proxy-9m7p9
	0d6b996118725       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54   4 minutes ago       Exited              kube-scheduler            2                   366e6735458fe       kube-scheduler-functional-262391
	
	
	==> coredns [c822d55e3c7244a97ad01609df5b78c3fce79bd4eec64ad507404b68998a4d17] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:33914 - 41630 "HINFO IN 7247736621702071472.4972413562285624387. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014654245s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f30c2ed26c999a125de4024a8dce2f72ab6cbcd08e87802732cf7d514d32e293] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:34002 - 51540 "HINFO IN 95307415092295182.2909860349863719764. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.014536766s
	
	
	==> describe nodes <==
	Name:               functional-262391
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-262391
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393f165ced08f66e4386491f243850f87982a22b
	                    minikube.k8s.io/name=functional-262391
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_26T21_58_53_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 26 Dec 2023 21:58:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-262391
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 26 Dec 2023 22:04:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 26 Dec 2023 22:00:52 +0000   Tue, 26 Dec 2023 21:58:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 26 Dec 2023 22:00:52 +0000   Tue, 26 Dec 2023 21:58:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 26 Dec 2023 22:00:52 +0000   Tue, 26 Dec 2023 21:58:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 26 Dec 2023 22:00:52 +0000   Tue, 26 Dec 2023 21:59:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-262391
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 2e6d1292674d43f19296ade50af320f8
	  System UUID:                b32e1f24-f323-4570-af86-7c639bf8b151
	  Boot ID:                    f8f887b2-8c20-433d-a967-90e814370f09
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m9s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  kube-system                 coredns-5dd5756b68-rvzcn                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m20s
	  kube-system                 etcd-functional-262391                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m33s
	  kube-system                 kindnet-cbhzk                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m21s
	  kube-system                 kube-apiserver-functional-262391             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m33s
	  kube-system                 kube-controller-manager-functional-262391    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m33s
	  kube-system                 kube-proxy-9m7p9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 kube-scheduler-functional-262391             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m33s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m19s                  kube-proxy       
	  Normal   Starting                 3m32s                  kube-proxy       
	  Normal   Starting                 4m22s                  kube-proxy       
	  Normal   Starting                 5m41s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  5m41s (x8 over 5m41s)  kubelet          Node functional-262391 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m41s (x8 over 5m41s)  kubelet          Node functional-262391 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m41s (x8 over 5m41s)  kubelet          Node functional-262391 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     5m33s                  kubelet          Node functional-262391 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  5m33s                  kubelet          Node functional-262391 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m33s                  kubelet          Node functional-262391 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 5m33s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           5m21s                  node-controller  Node functional-262391 event: Registered Node functional-262391 in Controller
	  Normal   NodeReady                4m50s                  kubelet          Node functional-262391 status is now: NodeReady
	  Warning  ContainerGCFailed        4m33s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m11s                  node-controller  Node functional-262391 event: Registered Node functional-262391 in Controller
	  Normal   Starting                 3m38s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  3m38s (x8 over 3m38s)  kubelet          Node functional-262391 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m38s (x8 over 3m38s)  kubelet          Node functional-262391 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m38s (x8 over 3m38s)  kubelet          Node functional-262391 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3m21s                  node-controller  Node functional-262391 event: Registered Node functional-262391 in Controller
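	
	The ContainerGCFailed warning above marks the window in which the kubelet could not reach the CRI-O socket while the runtime restarted. A minimal check of that socket from inside the node (a sketch assuming shell access; these commands are not part of the captured run):
	
	  $ minikube -p functional-262391 ssh
	  $ sudo systemctl is-active crio          # expect "active" once the runtime is back up
	  $ ls -l /var/run/crio/crio.sock          # the socket the kubelet dials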
	
	
	==> dmesg <==
	[  +0.001114] FS-Cache: O-key=[8] '635f3b0000000000'
	[  +0.000763] FS-Cache: N-cookie c=00000030 [p=00000027 fl=2 nc=0 na=1]
	[  +0.001031] FS-Cache: N-cookie d=00000000b9607d6a{9p.inode} n=000000000db3c1b7
	[  +0.001157] FS-Cache: N-key=[8] '635f3b0000000000'
	[  +0.002874] FS-Cache: Duplicate cookie detected
	[  +0.000764] FS-Cache: O-cookie c=0000002a [p=00000027 fl=226 nc=0 na=1]
	[  +0.001117] FS-Cache: O-cookie d=00000000b9607d6a{9p.inode} n=000000007ac7c815
	[  +0.001084] FS-Cache: O-key=[8] '635f3b0000000000'
	[  +0.000742] FS-Cache: N-cookie c=00000031 [p=00000027 fl=2 nc=0 na=1]
	[  +0.001038] FS-Cache: N-cookie d=00000000b9607d6a{9p.inode} n=00000000328509c1
	[  +0.001125] FS-Cache: N-key=[8] '635f3b0000000000'
	[  +2.220713] FS-Cache: Duplicate cookie detected
	[  +0.000749] FS-Cache: O-cookie c=00000028 [p=00000027 fl=226 nc=0 na=1]
	[  +0.001122] FS-Cache: O-cookie d=00000000b9607d6a{9p.inode} n=00000000ebeba0e0
	[  +0.001200] FS-Cache: O-key=[8] '615f3b0000000000'
	[  +0.000765] FS-Cache: N-cookie c=00000033 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000989] FS-Cache: N-cookie d=00000000b9607d6a{9p.inode} n=000000008353ea7f
	[  +0.001072] FS-Cache: N-key=[8] '615f3b0000000000'
	[  +0.309997] FS-Cache: Duplicate cookie detected
	[  +0.000749] FS-Cache: O-cookie c=0000002d [p=00000027 fl=226 nc=0 na=1]
	[  +0.001114] FS-Cache: O-cookie d=00000000b9607d6a{9p.inode} n=00000000e02b88cc
	[  +0.001198] FS-Cache: O-key=[8] '695f3b0000000000'
	[  +0.000739] FS-Cache: N-cookie c=00000034 [p=00000027 fl=2 nc=0 na=1]
	[  +0.001020] FS-Cache: N-cookie d=00000000b9607d6a{9p.inode} n=000000000db3c1b7
	[  +0.001131] FS-Cache: N-key=[8] '695f3b0000000000'
	
	
	==> etcd [07811118ec726184590369e131eb2abab397c3246388e1fecc22781dae65d731] <==
	{"level":"info","ts":"2023-12-26T22:00:48.081774Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2023-12-26T22:00:48.081952Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-26T22:00:48.084621Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-26T22:00:48.083184Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-26T22:00:48.094999Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-26T22:00:48.095225Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-26T22:00:48.107302Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-12-26T22:00:48.107448Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-12-26T22:00:48.107564Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-12-26T22:00:48.108319Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-26T22:00:48.10841Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-26T22:00:49.760552Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2023-12-26T22:00:49.760671Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2023-12-26T22:00:49.760719Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2023-12-26T22:00:49.760759Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 4"}
	{"level":"info","ts":"2023-12-26T22:00:49.760799Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2023-12-26T22:00:49.760842Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
	{"level":"info","ts":"2023-12-26T22:00:49.760889Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
	{"level":"info","ts":"2023-12-26T22:00:49.768749Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-262391 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-26T22:00:49.768879Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-26T22:00:49.769116Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-26T22:00:49.781051Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-12-26T22:00:49.792376Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-26T22:00:49.795458Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-26T22:00:49.795495Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> etcd [966133bd698f243db888d7ef30b4d6b869951a41045f90da8aad478517af91f1] <==
	{"level":"info","ts":"2023-12-26T21:59:58.700728Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-26T21:59:59.656577Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2023-12-26T21:59:59.656625Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2023-12-26T21:59:59.656646Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-12-26T21:59:59.656673Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2023-12-26T21:59:59.656684Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2023-12-26T21:59:59.656696Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2023-12-26T21:59:59.656713Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2023-12-26T21:59:59.665828Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-262391 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-26T21:59:59.665876Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-26T21:59:59.666068Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-26T21:59:59.666131Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-26T21:59:59.66618Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-26T21:59:59.666967Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-26T21:59:59.668811Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-12-26T22:00:30.346229Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-12-26T22:00:30.346297Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"functional-262391","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2023-12-26T22:00:30.346388Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-12-26T22:00:30.346492Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-12-26T22:00:30.453048Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-12-26T22:00:30.453103Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2023-12-26T22:00:30.453164Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2023-12-26T22:00:30.455643Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-12-26T22:00:30.455764Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-12-26T22:00:30.455836Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"functional-262391","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 22:04:25 up  5:46,  0 users,  load average: 0.22, 0.73, 1.11
	Linux functional-262391 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [1df6c7623610bbf8c6b273c1740811d523e0b388e3b58285ccb585dec2832714] <==
	I1226 22:02:23.136856       1 main.go:227] handling current node
	I1226 22:02:33.146037       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 22:02:33.146068       1 main.go:227] handling current node
	I1226 22:02:43.150872       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 22:02:43.150974       1 main.go:227] handling current node
	I1226 22:02:53.154534       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 22:02:53.154566       1 main.go:227] handling current node
	I1226 22:03:03.164930       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 22:03:03.164959       1 main.go:227] handling current node
	I1226 22:03:13.169435       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 22:03:13.169466       1 main.go:227] handling current node
	I1226 22:03:23.180839       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 22:03:23.180939       1 main.go:227] handling current node
	I1226 22:03:33.185631       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 22:03:33.185657       1 main.go:227] handling current node
	I1226 22:03:43.196067       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 22:03:43.196097       1 main.go:227] handling current node
	I1226 22:03:53.200206       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 22:03:53.200236       1 main.go:227] handling current node
	I1226 22:04:03.212932       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 22:04:03.212964       1 main.go:227] handling current node
	I1226 22:04:13.224504       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 22:04:13.224562       1 main.go:227] handling current node
	I1226 22:04:23.232653       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 22:04:23.232689       1 main.go:227] handling current node
	
	
	==> kindnet [e26ee425462b716831b83e20c569f4e514e2fa4c6413ef4056d6e78bd6facc52] <==
	I1226 21:59:58.372081       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1226 21:59:58.372349       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1226 21:59:58.372950       1 main.go:116] setting mtu 1500 for CNI 
	I1226 21:59:58.372998       1 main.go:146] kindnetd IP family: "ipv4"
	I1226 21:59:58.373048       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1226 22:00:02.954600       1 main.go:191] Failed to get nodes, retrying after error: nodes is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "nodes" in API group "" at the cluster scope
	I1226 22:00:02.984743       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 22:00:02.984854       1 main.go:227] handling current node
	I1226 22:00:13.002693       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 22:00:13.002728       1 main.go:227] handling current node
	I1226 22:00:23.016215       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 22:00:23.016363       1 main.go:227] handling current node
	
	
	==> kube-apiserver [bb588759e509222c7f1078424a8f078415da95e47ab661f0dc3d09afabecff7f] <==
	I1226 22:00:51.782495       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I1226 22:00:51.989128       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1226 22:00:51.989170       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1226 22:00:51.989176       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1226 22:00:51.989407       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1226 22:00:51.989678       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1226 22:00:51.992399       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1226 22:00:51.992508       1 aggregator.go:166] initial CRD sync complete...
	I1226 22:00:51.992588       1 autoregister_controller.go:141] Starting autoregister controller
	I1226 22:00:51.992618       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1226 22:00:51.992661       1 cache.go:39] Caches are synced for autoregister controller
	I1226 22:00:51.995887       1 shared_informer.go:318] Caches are synced for configmaps
	I1226 22:00:52.002022       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E1226 22:00:52.007913       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1226 22:00:52.038705       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1226 22:00:52.718176       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1226 22:00:54.005370       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1226 22:00:54.157230       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1226 22:00:54.167103       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1226 22:00:54.231141       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1226 22:00:54.238992       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1226 22:01:10.192830       1 controller.go:624] quota admission added evaluator for: endpoints
	I1226 22:01:10.337110       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.98.77.75"}
	I1226 22:01:10.359103       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1226 22:01:17.039859       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.99.144.81"}
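	
	The last apiserver line records the ClusterIP handed to the nginx-svc service under test. A quick cross-check of that allocation (an assumed follow-up query, not part of the captured run):
	
	  $ kubectl --context functional-262391 get svc nginx-svc
	  # expect CLUSTER-IP 10.99.144.81, PORT(S) 80/TCP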
	
	
	==> kube-controller-manager [1ef2a1a3e47ebe5b2a04ac354c6ce5c027694d64a2536f75cc3431a8e1535ae8] <==
	I1226 22:00:14.974449       1 range_allocator.go:178] "Starting range CIDR allocator"
	I1226 22:00:14.974518       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I1226 22:00:14.974557       1 shared_informer.go:318] Caches are synced for cidrallocator
	I1226 22:00:14.973852       1 shared_informer.go:318] Caches are synced for GC
	I1226 22:00:14.976513       1 shared_informer.go:318] Caches are synced for namespace
	I1226 22:00:14.976597       1 shared_informer.go:318] Caches are synced for job
	I1226 22:00:14.978614       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1226 22:00:14.980508       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I1226 22:00:14.981876       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I1226 22:00:14.982760       1 shared_informer.go:318] Caches are synced for PVC protection
	I1226 22:00:14.985455       1 shared_informer.go:318] Caches are synced for daemon sets
	I1226 22:00:14.996674       1 shared_informer.go:318] Caches are synced for HPA
	I1226 22:00:14.997307       1 shared_informer.go:318] Caches are synced for service account
	I1226 22:00:14.997394       1 shared_informer.go:318] Caches are synced for ephemeral
	I1226 22:00:15.013664       1 shared_informer.go:318] Caches are synced for disruption
	I1226 22:00:15.018059       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I1226 22:00:15.076300       1 shared_informer.go:318] Caches are synced for resource quota
	I1226 22:00:15.120907       1 shared_informer.go:318] Caches are synced for resource quota
	I1226 22:00:15.132513       1 shared_informer.go:318] Caches are synced for attach detach
	I1226 22:00:15.505453       1 shared_informer.go:318] Caches are synced for garbage collector
	I1226 22:00:15.505485       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1226 22:00:15.520800       1 shared_informer.go:318] Caches are synced for garbage collector
	I1226 22:00:18.061962       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="92.962µs"
	I1226 22:00:18.085084       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.748274ms"
	I1226 22:00:18.085172       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="50.666µs"
	
	
	==> kube-controller-manager [e1ece719ec1a10c589149010507e7de5c8094e4c14f55d6fd4584a7566a63b25] <==
	I1226 22:01:04.771016       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1226 22:01:04.771473       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I1226 22:01:04.770198       1 shared_informer.go:318] Caches are synced for service account
	I1226 22:01:04.771707       1 event.go:307] "Event occurred" object="functional-262391" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-262391 event: Registered Node functional-262391 in Controller"
	I1226 22:01:04.773223       1 shared_informer.go:318] Caches are synced for cronjob
	I1226 22:01:04.774573       1 shared_informer.go:318] Caches are synced for persistent volume
	I1226 22:01:04.777953       1 shared_informer.go:318] Caches are synced for ephemeral
	I1226 22:01:04.821041       1 shared_informer.go:318] Caches are synced for GC
	I1226 22:01:04.821061       1 shared_informer.go:318] Caches are synced for daemon sets
	I1226 22:01:04.821081       1 shared_informer.go:318] Caches are synced for HPA
	I1226 22:01:04.823559       1 shared_informer.go:318] Caches are synced for job
	I1226 22:01:04.835970       1 shared_informer.go:318] Caches are synced for endpoint
	I1226 22:01:04.844142       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I1226 22:01:04.844345       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="108.403µs"
	I1226 22:01:04.851551       1 shared_informer.go:318] Caches are synced for PVC protection
	I1226 22:01:04.861935       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1226 22:01:04.864694       1 shared_informer.go:318] Caches are synced for resource quota
	I1226 22:01:04.873289       1 shared_informer.go:318] Caches are synced for ReplicationController
	I1226 22:01:04.877638       1 shared_informer.go:318] Caches are synced for resource quota
	I1226 22:01:04.884822       1 shared_informer.go:318] Caches are synced for disruption
	I1226 22:01:04.895080       1 shared_informer.go:318] Caches are synced for attach detach
	I1226 22:01:05.305251       1 shared_informer.go:318] Caches are synced for garbage collector
	I1226 22:01:05.312450       1 shared_informer.go:318] Caches are synced for garbage collector
	I1226 22:01:05.312485       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1226 22:01:23.316988       1 event.go:307] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'k8s.io/minikube-hostpath' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	
	
	==> kube-proxy [d2c7b8ff0a10f748682604cace54439cecb66a17767ffdd7df7f8e96de842b2e] <==
	I1226 22:00:52.725024       1 server_others.go:69] "Using iptables proxy"
	I1226 22:00:52.751556       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1226 22:00:52.811655       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1226 22:00:52.815551       1 server_others.go:152] "Using iptables Proxier"
	I1226 22:00:52.815623       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1226 22:00:52.815632       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1226 22:00:52.815666       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1226 22:00:52.816095       1 server.go:846] "Version info" version="v1.28.4"
	I1226 22:00:52.816115       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1226 22:00:52.817510       1 config.go:188] "Starting service config controller"
	I1226 22:00:52.817534       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1226 22:00:52.817553       1 config.go:97] "Starting endpoint slice config controller"
	I1226 22:00:52.817557       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1226 22:00:52.817964       1 config.go:315] "Starting node config controller"
	I1226 22:00:52.817981       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1226 22:00:52.918637       1 shared_informer.go:318] Caches are synced for node config
	I1226 22:00:52.918671       1 shared_informer.go:318] Caches are synced for service config
	I1226 22:00:52.918724       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [d8f430ff9b1cde2991cdb177244c7ccd14d87e4017a42ed0df1ec6b53cbd6242] <==
	I1226 22:00:01.318569       1 server_others.go:69] "Using iptables proxy"
	I1226 22:00:03.050894       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1226 22:00:03.110300       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1226 22:00:03.116719       1 server_others.go:152] "Using iptables Proxier"
	I1226 22:00:03.116866       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1226 22:00:03.116910       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1226 22:00:03.117029       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1226 22:00:03.117340       1 server.go:846] "Version info" version="v1.28.4"
	I1226 22:00:03.117617       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1226 22:00:03.119434       1 config.go:188] "Starting service config controller"
	I1226 22:00:03.119556       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1226 22:00:03.119614       1 config.go:97] "Starting endpoint slice config controller"
	I1226 22:00:03.119649       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1226 22:00:03.120195       1 config.go:315] "Starting node config controller"
	I1226 22:00:03.120260       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1226 22:00:03.220989       1 shared_informer.go:318] Caches are synced for node config
	I1226 22:00:03.221031       1 shared_informer.go:318] Caches are synced for service config
	I1226 22:00:03.221068       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [0d6b996118725bf2ffc61c626851a3807df1a7c2c242876e22173b8ef119c623] <==
	I1226 22:00:01.995786       1 serving.go:348] Generated self-signed cert in-memory
	I1226 22:00:03.301095       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I1226 22:00:03.301211       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1226 22:00:03.311079       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1226 22:00:03.311302       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1226 22:00:03.311350       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1226 22:00:03.311394       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1226 22:00:03.320075       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1226 22:00:03.320296       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1226 22:00:03.320680       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1226 22:00:03.320991       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1226 22:00:03.412392       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1226 22:00:03.421000       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1226 22:00:03.421147       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1226 22:00:30.344734       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I1226 22:00:30.345539       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E1226 22:00:30.345780       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [4dcd4a3ad372e91c67360adb38c85ffb52ff6ba6a94492c75a7cfc9d92935baa] <==
	I1226 22:00:49.322439       1 serving.go:348] Generated self-signed cert in-memory
	W1226 22:00:51.900951       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1226 22:00:51.901057       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1226 22:00:51.901093       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1226 22:00:51.901142       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1226 22:00:51.969047       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I1226 22:00:51.969150       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1226 22:00:51.972071       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1226 22:00:51.972640       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1226 22:00:51.973103       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1226 22:00:51.972670       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1226 22:00:52.073693       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 26 22:03:33 functional-262391 kubelet[4675]: E1226 22:03:33.023471    4675 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="265d1a5f-6c1b-41a9-830c-235ee9d55ffe"
	Dec 26 22:03:44 functional-262391 kubelet[4675]: E1226 22:03:44.178334    4675 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="265d1a5f-6c1b-41a9-830c-235ee9d55ffe"
	Dec 26 22:03:47 functional-262391 kubelet[4675]: E1226 22:03:47.306065    4675 manager.go:1106] Failed to create existing container: /crio-94929afe0d0c23b274bad43180080edef96f59530ce4bff50ee74bb6edd6d636: Error finding container 94929afe0d0c23b274bad43180080edef96f59530ce4bff50ee74bb6edd6d636: Status 404 returned error can't find the container with id 94929afe0d0c23b274bad43180080edef96f59530ce4bff50ee74bb6edd6d636
	Dec 26 22:03:47 functional-262391 kubelet[4675]: E1226 22:03:47.306696    4675 manager.go:1106] Failed to create existing container: /docker/0f329dae541ede1670d5c7556e5f4b260ee3b6f71728967a170153f52271f2cf/crio-fef827a05791974659b0b84ebbfb174647c84037c272f8d37e49dd4e0a7f2eb5: Error finding container fef827a05791974659b0b84ebbfb174647c84037c272f8d37e49dd4e0a7f2eb5: Status 404 returned error can't find the container with id fef827a05791974659b0b84ebbfb174647c84037c272f8d37e49dd4e0a7f2eb5
	Dec 26 22:03:47 functional-262391 kubelet[4675]: E1226 22:03:47.306987    4675 manager.go:1106] Failed to create existing container: /crio-ced93628785c08737c21e870f51dfccb3a72bf75cbb625a905b814cdb39d300c: Error finding container ced93628785c08737c21e870f51dfccb3a72bf75cbb625a905b814cdb39d300c: Status 404 returned error can't find the container with id ced93628785c08737c21e870f51dfccb3a72bf75cbb625a905b814cdb39d300c
	Dec 26 22:03:47 functional-262391 kubelet[4675]: E1226 22:03:47.307272    4675 manager.go:1106] Failed to create existing container: /docker/0f329dae541ede1670d5c7556e5f4b260ee3b6f71728967a170153f52271f2cf/crio-94929afe0d0c23b274bad43180080edef96f59530ce4bff50ee74bb6edd6d636: Error finding container 94929afe0d0c23b274bad43180080edef96f59530ce4bff50ee74bb6edd6d636: Status 404 returned error can't find the container with id 94929afe0d0c23b274bad43180080edef96f59530ce4bff50ee74bb6edd6d636
	Dec 26 22:03:47 functional-262391 kubelet[4675]: E1226 22:03:47.307478    4675 manager.go:1106] Failed to create existing container: /crio-366e6735458fedfa28cfa70967455c9882b1068bc47fbc77eb10a3c732b36f54: Error finding container 366e6735458fedfa28cfa70967455c9882b1068bc47fbc77eb10a3c732b36f54: Status 404 returned error can't find the container with id 366e6735458fedfa28cfa70967455c9882b1068bc47fbc77eb10a3c732b36f54
	Dec 26 22:03:47 functional-262391 kubelet[4675]: E1226 22:03:47.307688    4675 manager.go:1106] Failed to create existing container: /docker/0f329dae541ede1670d5c7556e5f4b260ee3b6f71728967a170153f52271f2cf/crio-ced93628785c08737c21e870f51dfccb3a72bf75cbb625a905b814cdb39d300c: Error finding container ced93628785c08737c21e870f51dfccb3a72bf75cbb625a905b814cdb39d300c: Status 404 returned error can't find the container with id ced93628785c08737c21e870f51dfccb3a72bf75cbb625a905b814cdb39d300c
	Dec 26 22:03:47 functional-262391 kubelet[4675]: E1226 22:03:47.307879    4675 manager.go:1106] Failed to create existing container: /crio-ac3b0cf8e4b5b312e960910cb2b32e65a942cf16f1fac38ba8edfbddf7c7793d: Error finding container ac3b0cf8e4b5b312e960910cb2b32e65a942cf16f1fac38ba8edfbddf7c7793d: Status 404 returned error can't find the container with id ac3b0cf8e4b5b312e960910cb2b32e65a942cf16f1fac38ba8edfbddf7c7793d
	Dec 26 22:03:47 functional-262391 kubelet[4675]: E1226 22:03:47.308063    4675 manager.go:1106] Failed to create existing container: /docker/0f329dae541ede1670d5c7556e5f4b260ee3b6f71728967a170153f52271f2cf/crio-04fe9bb0de4db7857fe6aa33597481a9ee720ad67bece900c5b70753748a4820: Error finding container 04fe9bb0de4db7857fe6aa33597481a9ee720ad67bece900c5b70753748a4820: Status 404 returned error can't find the container with id 04fe9bb0de4db7857fe6aa33597481a9ee720ad67bece900c5b70753748a4820
	Dec 26 22:03:47 functional-262391 kubelet[4675]: E1226 22:03:47.308365    4675 manager.go:1106] Failed to create existing container: /docker/0f329dae541ede1670d5c7556e5f4b260ee3b6f71728967a170153f52271f2cf/crio-995e88ed74fef62deb6947fa0d06332dc498407081e6e8b35c83661d3d6e0dc8: Error finding container 995e88ed74fef62deb6947fa0d06332dc498407081e6e8b35c83661d3d6e0dc8: Status 404 returned error can't find the container with id 995e88ed74fef62deb6947fa0d06332dc498407081e6e8b35c83661d3d6e0dc8
	Dec 26 22:03:47 functional-262391 kubelet[4675]: E1226 22:03:47.308626    4675 manager.go:1106] Failed to create existing container: /docker/0f329dae541ede1670d5c7556e5f4b260ee3b6f71728967a170153f52271f2cf/crio-cb35f2f6fa9bf27026f38376801e38eea4561a9f629ff5339aa0bf5d18c1139c: Error finding container cb35f2f6fa9bf27026f38376801e38eea4561a9f629ff5339aa0bf5d18c1139c: Status 404 returned error can't find the container with id cb35f2f6fa9bf27026f38376801e38eea4561a9f629ff5339aa0bf5d18c1139c
	Dec 26 22:03:47 functional-262391 kubelet[4675]: E1226 22:03:47.308929    4675 manager.go:1106] Failed to create existing container: /crio-d60ff4eb1f096d6d9c9798e18fcd6ea0f7621bfd4c074fe40f3e5b10c9766013: Error finding container d60ff4eb1f096d6d9c9798e18fcd6ea0f7621bfd4c074fe40f3e5b10c9766013: Status 404 returned error can't find the container with id d60ff4eb1f096d6d9c9798e18fcd6ea0f7621bfd4c074fe40f3e5b10c9766013
	Dec 26 22:03:47 functional-262391 kubelet[4675]: E1226 22:03:47.309206    4675 manager.go:1106] Failed to create existing container: /crio-cb35f2f6fa9bf27026f38376801e38eea4561a9f629ff5339aa0bf5d18c1139c: Error finding container cb35f2f6fa9bf27026f38376801e38eea4561a9f629ff5339aa0bf5d18c1139c: Status 404 returned error can't find the container with id cb35f2f6fa9bf27026f38376801e38eea4561a9f629ff5339aa0bf5d18c1139c
	Dec 26 22:03:47 functional-262391 kubelet[4675]: E1226 22:03:47.309433    4675 manager.go:1106] Failed to create existing container: /docker/0f329dae541ede1670d5c7556e5f4b260ee3b6f71728967a170153f52271f2cf/crio-ac3b0cf8e4b5b312e960910cb2b32e65a942cf16f1fac38ba8edfbddf7c7793d: Error finding container ac3b0cf8e4b5b312e960910cb2b32e65a942cf16f1fac38ba8edfbddf7c7793d: Status 404 returned error can't find the container with id ac3b0cf8e4b5b312e960910cb2b32e65a942cf16f1fac38ba8edfbddf7c7793d
	Dec 26 22:03:47 functional-262391 kubelet[4675]: E1226 22:03:47.309659    4675 manager.go:1106] Failed to create existing container: /docker/0f329dae541ede1670d5c7556e5f4b260ee3b6f71728967a170153f52271f2cf/crio-366e6735458fedfa28cfa70967455c9882b1068bc47fbc77eb10a3c732b36f54: Error finding container 366e6735458fedfa28cfa70967455c9882b1068bc47fbc77eb10a3c732b36f54: Status 404 returned error can't find the container with id 366e6735458fedfa28cfa70967455c9882b1068bc47fbc77eb10a3c732b36f54
	Dec 26 22:03:47 functional-262391 kubelet[4675]: E1226 22:03:47.309829    4675 manager.go:1106] Failed to create existing container: /crio-995e88ed74fef62deb6947fa0d06332dc498407081e6e8b35c83661d3d6e0dc8: Error finding container 995e88ed74fef62deb6947fa0d06332dc498407081e6e8b35c83661d3d6e0dc8: Status 404 returned error can't find the container with id 995e88ed74fef62deb6947fa0d06332dc498407081e6e8b35c83661d3d6e0dc8
	Dec 26 22:03:47 functional-262391 kubelet[4675]: E1226 22:03:47.310037    4675 manager.go:1106] Failed to create existing container: /docker/0f329dae541ede1670d5c7556e5f4b260ee3b6f71728967a170153f52271f2cf/crio-d60ff4eb1f096d6d9c9798e18fcd6ea0f7621bfd4c074fe40f3e5b10c9766013: Error finding container d60ff4eb1f096d6d9c9798e18fcd6ea0f7621bfd4c074fe40f3e5b10c9766013: Status 404 returned error can't find the container with id d60ff4eb1f096d6d9c9798e18fcd6ea0f7621bfd4c074fe40f3e5b10c9766013
	Dec 26 22:03:47 functional-262391 kubelet[4675]: E1226 22:03:47.310290    4675 manager.go:1106] Failed to create existing container: /crio-fef827a05791974659b0b84ebbfb174647c84037c272f8d37e49dd4e0a7f2eb5: Error finding container fef827a05791974659b0b84ebbfb174647c84037c272f8d37e49dd4e0a7f2eb5: Status 404 returned error can't find the container with id fef827a05791974659b0b84ebbfb174647c84037c272f8d37e49dd4e0a7f2eb5
	Dec 26 22:03:47 functional-262391 kubelet[4675]: E1226 22:03:47.310471    4675 manager.go:1106] Failed to create existing container: /crio-04fe9bb0de4db7857fe6aa33597481a9ee720ad67bece900c5b70753748a4820: Error finding container 04fe9bb0de4db7857fe6aa33597481a9ee720ad67bece900c5b70753748a4820: Status 404 returned error can't find the container with id 04fe9bb0de4db7857fe6aa33597481a9ee720ad67bece900c5b70753748a4820
	Dec 26 22:04:03 functional-262391 kubelet[4675]: E1226 22:04:03.302077    4675 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Dec 26 22:04:03 functional-262391 kubelet[4675]: E1226 22:04:03.302133    4675 kuberuntime_image.go:53] "Failed to pull image" err="reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Dec 26 22:04:03 functional-262391 kubelet[4675]: E1226 22:04:03.302365    4675 kuberuntime_manager.go:1261] container &Container{Name:nginx,Image:docker.io/nginx:alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-75hs6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nginx-svc_default(8b3bd6d9-b5bc-4a9b-9342-a27341ae6130): ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Dec 26 22:04:03 functional-262391 kubelet[4675]: E1226 22:04:03.302408    4675 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="8b3bd6d9-b5bc-4a9b-9342-a27341ae6130"
	Dec 26 22:04:18 functional-262391 kubelet[4675]: E1226 22:04:18.178312    4675 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx-svc" podUID="8b3bd6d9-b5bc-4a9b-9342-a27341ae6130"
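	
	The kubelet errors above are the proximate cause of this failure: docker.io answers toomanyrequests for both nginx:alpine (nginx-svc) and nginx (sp-pod), so both containers stay in ImagePullBackOff until the test's wait expires. Two hedged workarounds for a rate-limited environment, neither of which the harness performs here: preload the image so no registry pull is needed, or attach authenticated Docker Hub credentials:
	
	  # side-load the image from the host into the cluster node
	  $ minikube -p functional-262391 image load docker.io/nginx:alpine
	
	  # or create a pull secret (placeholder credentials; reference it via imagePullSecrets)
	  $ kubectl --context functional-262391 create secret docker-registry dockerhub-creds \
	      --docker-server=https://index.docker.io/v1/ \
	      --docker-username=<user> --docker-password=<access-token>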
	
	
	==> storage-provisioner [312553732c0c5f3bb72b7c7cbbb9bf51f96775ad4a09c51ca62e4d6d2a24c91b] <==
	I1226 22:00:52.728575       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1226 22:00:52.790110       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1226 22:00:52.790273       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1226 22:01:10.199656       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1226 22:01:10.199901       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-262391_775a6439-92fa-4604-8739-d60b7d78fd26!
	I1226 22:01:10.200860       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a200dfb9-8de7-4759-be4f-3372cc14fa73", APIVersion:"v1", ResourceVersion:"652", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-262391_775a6439-92fa-4604-8739-d60b7d78fd26 became leader
	I1226 22:01:10.301054       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-262391_775a6439-92fa-4604-8739-d60b7d78fd26!
	I1226 22:01:23.314604       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I1226 22:01:23.314663       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    e7439051-2e3f-4605-b293-6fc725214ae8 405 0 2023-12-26 21:59:06 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2023-12-26 21:59:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-c0347271-c017-4b68-ad50-b7f9527c0b76 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  c0347271-c017-4b68-ad50-b7f9527c0b76 708 0 2023-12-26 22:01:23 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2023-12-26 22:01:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2023-12-26 22:01:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I1226 22:01:23.317582       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-c0347271-c017-4b68-ad50-b7f9527c0b76" provisioned
	I1226 22:01:23.317607       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I1226 22:01:23.317614       1 volume_store.go:212] Trying to save persistentvolume "pvc-c0347271-c017-4b68-ad50-b7f9527c0b76"
	I1226 22:01:23.318587       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"c0347271-c017-4b68-ad50-b7f9527c0b76", APIVersion:"v1", ResourceVersion:"708", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I1226 22:01:23.343648       1 volume_store.go:219] persistentvolume "pvc-c0347271-c017-4b68-ad50-b7f9527c0b76" saved
	I1226 22:01:23.343747       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"c0347271-c017-4b68-ad50-b7f9527c0b76", APIVersion:"v1", ResourceVersion:"708", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-c0347271-c017-4b68-ad50-b7f9527c0b76
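	
	The provisioner log above embeds the full claim it served. Reconstructed from those logged fields (ReadWriteOnce, 500Mi, Filesystem mode, the default "standard" class implied by omitting storageClassName), an equivalent manifest for myclaim would be:
	
	  apiVersion: v1
	  kind: PersistentVolumeClaim
	  metadata:
	    name: myclaim
	    namespace: default
	  spec:
	    accessModes: ["ReadWriteOnce"]
	    resources:
	      requests:
	        storage: 500Mi
	    volumeMode: Filesystem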
	
	
	==> storage-provisioner [a34d8d5ed9ea4f1f416f5d388b6188b40b9af4f3a7333580289b2408beeb1b77] <==
	I1226 21:59:59.075528       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1226 22:00:03.061761       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1226 22:00:03.061918       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1226 22:00:20.478235       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1226 22:00:20.479308       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-262391_41c437c0-f9a2-44bc-b491-21885d0fe963!
	I1226 22:00:20.481227       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a200dfb9-8de7-4759-be4f-3372cc14fa73", APIVersion:"v1", ResourceVersion:"556", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-262391_41c437c0-f9a2-44bc-b491-21885d0fe963 became leader
	I1226 22:00:20.579865       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-262391_41c437c0-f9a2-44bc-b491-21885d0fe963!
	

-- /stdout --
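
Note: the storage-provisioner log captured above shows the dynamic-provisioning sequence completing normally: the controller acquires the kube-system/k8s.io-minikube-hostpath lease, provisions hostpath volume pvc-c0347271-c017-4b68-ad50-b7f9527c0b76 for claim "default/myclaim" under /tmp/hostpath-provisioner, and saves the PersistentVolume. The failure below is therefore in the test pods, not in the PVC machinery. A quick manual check (illustrative commands, not part of the recorded run) would be:

    kubectl --context functional-262391 get pvc myclaim -n default
    kubectl --context functional-262391 get pv pvc-c0347271-c017-4b68-ad50-b7f9527c0b76

Both should report a Bound status if provisioning succeeded as logged.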
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-262391 -n functional-262391
helpers_test.go:261: (dbg) Run:  kubectl --context functional-262391 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: nginx-svc sp-pod
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-262391 describe pod nginx-svc sp-pod
helpers_test.go:282: (dbg) kubectl --context functional-262391 describe pod nginx-svc sp-pod:

-- stdout --
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-262391/192.168.49.2
	Start Time:       Tue, 26 Dec 2023 22:01:17 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:  10.244.0.4
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-75hs6 (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  kube-api-access-75hs6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m10s                default-scheduler  Successfully assigned default/nginx-svc to functional-262391
	  Warning  Failed     2m40s                kubelet            Failed to pull image "docker.io/nginx:alpine": initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     85s                  kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:7913e8fa2e6a5f0160a5e6b7ea48b7d4a301c6058d63c3d632a35a59093cb4eb in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    55s (x3 over 3m10s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     24s (x3 over 2m40s)  kubelet            Error: ErrImagePull
	  Warning  Failed     24s                  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   BackOff    9s (x3 over 2m39s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     9s (x3 over 2m39s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-262391/192.168.49.2
	Start Time:       Tue, 26 Dec 2023 22:01:23 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4hc7k (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-4hc7k:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  3m4s                default-scheduler  Successfully assigned default/sp-pod to functional-262391
	  Warning  Failed     54s (x2 over 2m9s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     54s (x2 over 2m9s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    43s (x2 over 2m9s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     43s (x2 over 2m9s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    31s (x3 over 3m4s)  kubelet            Pulling image "docker.io/nginx"

-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (190.23s)
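
Note: both non-running pods above are stuck pulling docker.io/nginx images because Docker Hub answered with toomanyrequests (the anonymous pull rate limit); the volume side of the test had already succeeded. One common mitigation on rate-limited CI hosts, shown here only as an illustrative sketch (the secret name and credentials are placeholders, not part of this run), is to attach authenticated registry credentials to the default service account:

    kubectl --context functional-262391 create secret docker-registry dockerhub-creds \
      --docker-server=https://index.docker.io/v1/ \
      --docker-username=<user> --docker-password=<access-token>
    kubectl --context functional-262391 patch serviceaccount default \
      -p '{"imagePullSecrets":[{"name":"dockerhub-creds"}]}'

Authenticated pulls are granted a substantially higher rate limit than anonymous ones, which is usually enough to unblock image-heavy parallel test runs.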

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (241s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-262391 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [8b3bd6d9-b5bc-4a9b-9342-a27341ae6130] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:329: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: WARNING: pod list for "default" "run=nginx-svc" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_tunnel_test.go:216: ***** TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: pod "run=nginx-svc" failed to start within 4m0s: context deadline exceeded ****
functional_test_tunnel_test.go:216: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-262391 -n functional-262391
functional_test_tunnel_test.go:216: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: showing logs for failed pods as of 2023-12-26 22:05:17.426722592 +0000 UTC m=+1238.079697447
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-262391 describe po nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) kubectl --context functional-262391 describe po nginx-svc -n default:
Name:             nginx-svc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-262391/192.168.49.2
Start Time:       Tue, 26 Dec 2023 22:01:17 +0000
Labels:           run=nginx-svc
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
IP:  10.244.0.4
Containers:
nginx:
Container ID:   
Image:          docker.io/nginx:alpine
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-75hs6 (ro)
Conditions:
Type              Status
Initialized       True 
Ready             False 
ContainersReady   False 
PodScheduled      True 
Volumes:
kube-api-access-75hs6:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  4m                   default-scheduler  Successfully assigned default/nginx-svc to functional-262391
Warning  Failed     3m30s                kubelet            Failed to pull image "docker.io/nginx:alpine": initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Warning  Failed     2m15s                kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:7913e8fa2e6a5f0160a5e6b7ea48b7d4a301c6058d63c3d632a35a59093cb4eb in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Warning  Failed     74s (x3 over 3m30s)  kubelet            Error: ErrImagePull
Warning  Failed     74s                  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Normal   BackOff    47s (x4 over 3m29s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
Warning  Failed     47s (x4 over 3m29s)  kubelet            Error: ImagePullBackOff
Normal   Pulling    32s (x4 over 4m)     kubelet            Pulling image "docker.io/nginx:alpine"
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-262391 logs nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) Non-zero exit: kubectl --context functional-262391 logs nginx-svc -n default: exit status 1 (93.760062ms)

** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx-svc" is waiting to start: trying and failing to pull image

** /stderr **
functional_test_tunnel_test.go:216: kubectl --context functional-262391 logs nginx-svc -n default: exit status 1
functional_test_tunnel_test.go:217: wait: run=nginx-svc within 4m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (241.00s)
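
Note: this is the same Docker Hub rate limit as in the PersistentVolumeClaim failure; the 4m0s wait on "run=nginx-svc" expires while the kubelet is still in image pull back-off. The wait performed by the test harness is roughly equivalent to the following (illustrative, not from the recorded run):

    kubectl --context functional-262391 wait --for=condition=ready pod -l run=nginx-svc --timeout=4m0s

which fails in the same way whenever the nginx image cannot be pulled before the timeout.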

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (94s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-262391 get svc nginx-svc
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
NAME        TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE
nginx-svc   LoadBalancer   10.99.144.81   10.99.144.81   80:31358/TCP   5m34s
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (94.00s)
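
Note: the LoadBalancer service itself looks healthy (nginx-svc has the external IP 10.99.144.81 published by minikube tunnel), but the test hit an empty URL, most likely because the preceding WaitService step failed and no endpoint was recorded, and in any case the backing pod was never Ready to serve "Welcome to nginx!". A manual reproduction of what the test attempts (illustrative commands, not part of the recorded run):

    minikube -p functional-262391 tunnel &
    kubectl --context functional-262391 get svc nginx-svc \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    curl http://10.99.144.81/

With a Ready pod behind the service, the curl response body would contain the default nginx welcome page.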

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (363.73s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-324559 addons enable ingress --alsologtostderr -v=5
E1226 22:11:16.631306  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/functional-262391/client.crt: no such file or directory
E1226 22:11:16.636605  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/functional-262391/client.crt: no such file or directory
E1226 22:11:16.646877  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/functional-262391/client.crt: no such file or directory
E1226 22:11:16.667193  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/functional-262391/client.crt: no such file or directory
E1226 22:11:16.707487  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/functional-262391/client.crt: no such file or directory
E1226 22:11:16.787826  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/functional-262391/client.crt: no such file or directory
E1226 22:11:16.948221  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/functional-262391/client.crt: no such file or directory
E1226 22:11:17.268591  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/functional-262391/client.crt: no such file or directory
E1226 22:11:17.908839  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/functional-262391/client.crt: no such file or directory
E1226 22:11:19.189639  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/functional-262391/client.crt: no such file or directory
E1226 22:11:21.750782  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/functional-262391/client.crt: no such file or directory
E1226 22:11:26.871842  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/functional-262391/client.crt: no such file or directory
E1226 22:11:37.112005  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/functional-262391/client.crt: no such file or directory
E1226 22:11:57.592892  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/functional-262391/client.crt: no such file or directory
E1226 22:12:38.553215  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/functional-262391/client.crt: no such file or directory
E1226 22:13:11.961035  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/client.crt: no such file or directory
E1226 22:14:00.474430  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/functional-262391/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ingress-addon-legacy-324559 addons enable ingress --alsologtostderr -v=5: exit status 10 (6m1.253265736s)

-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	* Verifying ingress addon...
	
	

-- /stdout --
** stderr ** 
	I1226 22:08:21.807286  733646 out.go:296] Setting OutFile to fd 1 ...
	I1226 22:08:21.808324  733646 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 22:08:21.808339  733646 out.go:309] Setting ErrFile to fd 2...
	I1226 22:08:21.808346  733646 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 22:08:21.808715  733646 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17857-697646/.minikube/bin
	I1226 22:08:21.809202  733646 mustload.go:65] Loading cluster: ingress-addon-legacy-324559
	I1226 22:08:21.809703  733646 config.go:182] Loaded profile config "ingress-addon-legacy-324559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1226 22:08:21.809752  733646 addons.go:600] checking whether the cluster is paused
	I1226 22:08:21.809908  733646 config.go:182] Loaded profile config "ingress-addon-legacy-324559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1226 22:08:21.809925  733646 host.go:66] Checking if "ingress-addon-legacy-324559" exists ...
	I1226 22:08:21.810549  733646 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-324559 --format={{.State.Status}}
	I1226 22:08:21.831382  733646 ssh_runner.go:195] Run: systemctl --version
	I1226 22:08:21.831446  733646 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-324559
	I1226 22:08:21.855967  733646 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33686 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/ingress-addon-legacy-324559/id_rsa Username:docker}
	I1226 22:08:21.970490  733646 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1226 22:08:21.970563  733646 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1226 22:08:22.027023  733646 cri.go:89] found id: "02f33002d9479a64055fccd43ef1ca7ab676214fbd5ccf695f09d9e759813c6e"
	I1226 22:08:22.027049  733646 cri.go:89] found id: "dce0f84a819509c98f957f9b06142244dd890242592ef8778d73ac98742e2356"
	I1226 22:08:22.027055  733646 cri.go:89] found id: "36c3a5e7fc3b0b3e62d89ddc70be43b8929f62a2440886ceded856e6e6596020"
	I1226 22:08:22.027060  733646 cri.go:89] found id: "beac61d1ace3e9fcddf8defb7ffd81bc410cf8d57adc9293474065e9908c9ed9"
	I1226 22:08:22.027064  733646 cri.go:89] found id: "46e6d02e3c5745545bfd24ad3504b526b69d1d83ce3073bf30c82b94071ba620"
	I1226 22:08:22.027069  733646 cri.go:89] found id: "575c4b5034ded1ed2f54ae4bccbe637a9d78408e528f471d7105f50193c84be5"
	I1226 22:08:22.027073  733646 cri.go:89] found id: "e9b1d6041f823f638d3ff0bcb0d2fd195521e835aa1beea773b245095f9bb10a"
	I1226 22:08:22.027080  733646 cri.go:89] found id: ""
	I1226 22:08:22.027138  733646 ssh_runner.go:195] Run: sudo runc list -f json
	I1226 22:08:22.062967  733646 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"02f33002d9479a64055fccd43ef1ca7ab676214fbd5ccf695f09d9e759813c6e","pid":2218,"status":"running","bundle":"/run/containers/storage/overlay-containers/02f33002d9479a64055fccd43ef1ca7ab676214fbd5ccf695f09d9e759813c6e/userdata","rootfs":"/var/lib/containers/storage/overlay/f85de49f957a743c83624d5e4554147fddaf080dab650d4aba4d730d5b3ff09f/merged","created":"2023-12-26T22:08:18.520442459Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"81a173bd","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.c
ri-o.Annotations":"{\"io.kubernetes.container.hash\":\"81a173bd\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"02f33002d9479a64055fccd43ef1ca7ab676214fbd5ccf695f09d9e759813c6e","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-12-26T22:08:18.489799881Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/coredns:1.6.7","io.kubernetes.cri-o
.ImageRef":"6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-66bff467f8-lsmfr\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"57d86f7d-5932-4ab4-ab83-a9ffd33cbc12\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-66bff467f8-lsmfr_57d86f7d-5932-4ab4-ab83-a9ffd33cbc12/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/f85de49f957a743c83624d5e4554147fddaf080dab650d4aba4d730d5b3ff09f/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-66bff467f8-lsmfr_kube-system_57d86f7d-5932-4ab4-ab83-a9ffd33cbc12_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/930284210ecee5ed4c6588b9582e897b52523493f12269326f1ea0e43fa25df5/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"930284210ecee5ed4c6588b9582e897b52523493f12269326f
1ea0e43fa25df5","io.kubernetes.cri-o.SandboxName":"k8s_coredns-66bff467f8-lsmfr_kube-system_57d86f7d-5932-4ab4-ab83-a9ffd33cbc12_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/57d86f7d-5932-4ab4-ab83-a9ffd33cbc12/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/57d86f7d-5932-4ab4-ab83-a9ffd33cbc12/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/57d86f7d-5932-4ab4-ab83-a9ffd33cbc12/containers/coredns/7b24a392\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\"
:\"/var/lib/kubelet/pods/57d86f7d-5932-4ab4-ab83-a9ffd33cbc12/volumes/kubernetes.io~secret/coredns-token-h68p8\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-66bff467f8-lsmfr","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"57d86f7d-5932-4ab4-ab83-a9ffd33cbc12","kubernetes.io/config.seen":"2023-12-26T22:08:18.135470183Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"36c3a5e7fc3b0b3e62d89ddc70be43b8929f62a2440886ceded856e6e6596020","pid":1945,"status":"running","bundle":"/run/containers/storage/overlay-containers/36c3a5e7fc3b0b3e62d89ddc70be43b8929f62a2440886ceded856e6e6596020/userdata","rootfs":"/var/lib/containers/storage/overlay/7d6fc88031a441b44b5eb53808e4091c641d2d22de7f9e4d7a82c5d9a4b442ce/merged","created":"2023-12-26T22:08:07.37939581Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"22c5ec75","io.kubernetes.conta
iner.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"22c5ec75\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"36c3a5e7fc3b0b3e62d89ddc70be43b8929f62a2440886ceded856e6e6596020","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-12-26T22:08:07.271252727Z","io.kubernetes.cri-o.Image":"565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-proxy:v1.18.20","io.kubernetes.cri-o.ImageRef":"565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8","io.kubernetes.cri-o.Labels":"{\"io.
kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-nv5jt\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"98081056-1e5f-4ad8-bb67-7da69b2e48c3\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-nv5jt_98081056-1e5f-4ad8-bb67-7da69b2e48c3/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/7d6fc88031a441b44b5eb53808e4091c641d2d22de7f9e4d7a82c5d9a4b442ce/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-nv5jt_kube-system_98081056-1e5f-4ad8-bb67-7da69b2e48c3_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/f5370a56de2f827eb89f1a6a0c00be939cb7d35349dd21cc1c5cdffce9335461/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"f5370a56de2f827eb89f1a6a0c00be939cb7d35349dd21cc1c5cdffce9335461","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-nv5jt_kube-system_98081056-1e5f-4ad8-bb67-7da69b2e48c3_0","i
o.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/98081056-1e5f-4ad8-bb67-7da69b2e48c3/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/98081056-1e5f-4ad8-bb67-7da69b2e48c3/containers/kube-proxy/f4bb82c3\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/98081056-1e5f-4ad8-bb67-7da69b2e48c3/volumes/kubernetes.io~configmap/kube-proxy\",
\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/98081056-1e5f-4ad8-bb67-7da69b2e48c3/volumes/kubernetes.io~secret/kube-proxy-token-nfqxn\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-proxy-nv5jt","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"98081056-1e5f-4ad8-bb67-7da69b2e48c3","kubernetes.io/config.seen":"2023-12-26T22:08:06.602276512Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"46e6d02e3c5745545bfd24ad3504b526b69d1d83ce3073bf30c82b94071ba620","pid":1493,"status":"running","bundle":"/run/containers/storage/overlay-containers/46e6d02e3c5745545bfd24ad3504b526b69d1d83ce3073bf30c82b94071ba620/userdata","rootfs":"/var/lib/containers/storage/overlay/188b6d988e17c8eeed5e035593688a627f67eb034bec77398840fc98582b26db/merged","created"
:"2023-12-26T22:07:42.15641048Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"ce880c0b","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"ce880c0b\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"46e6d02e3c5745545bfd24ad3504b526b69d1d83ce3073bf30c82b94071ba620","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-12-26T22:07:42.071531806Z","io.kubernetes.cri-o.Image":"68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/k
ube-controller-manager:v1.18.20","io.kubernetes.cri-o.ImageRef":"68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-ingress-addon-legacy-324559\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"49b043cd68fd30a453bdf128db5271f3\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-ingress-addon-legacy-324559_49b043cd68fd30a453bdf128db5271f3/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/188b6d988e17c8eeed5e035593688a627f67eb034bec77398840fc98582b26db/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-ingress-addon-legacy-324559_kube-system_49b043cd68fd30a453bdf128db5271f3_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/ove
rlay-containers/89cbf22e5fbc639926be3ef576cb26c8d50c75151c3f112a6c5799cf097678ef/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"89cbf22e5fbc639926be3ef576cb26c8d50c75151c3f112a6c5799cf097678ef","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-ingress-addon-legacy-324559_kube-system_49b043cd68fd30a453bdf128db5271f3_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/49b043cd68fd30a453bdf128db5271f3/containers/kube-controller-manager/765c8de0\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/49b043cd68fd30a453bdf128db5271f3/etc-hosts\",\"read
only\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":fals
e,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-ingress-addon-legacy-324559","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"49b043cd68fd30a453bdf128db5271f3","kubernetes.io/config.hash":"49b043cd68fd30a453bdf128db5271f3","kubernetes.io/config.seen":"2023-12-26T22:07:37.913046450Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"575c4b5034ded1ed2f54ae4bccbe637a9d78408e528f471d7105f50193c84be5","pid":1473,"status":"running","bundle":"/run/containers/storage/overlay-containers/575c4b5034ded1ed2f54ae4bccbe637a9d78408e528f471d7105f50193c84be5/userdata","rootfs":"/var/lib/containers/storage/overlay/aa6af0479708d4a427e60f20ecce7388eb978083360e19ed13ef5f804f61972f/merged","created":"2023-12-26T22:07:42.102054379Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"fd1dd8ff","io.kubernetes.container.name":"kube-apiserver","io.kubernete
s.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"fd1dd8ff\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"575c4b5034ded1ed2f54ae4bccbe637a9d78408e528f471d7105f50193c84be5","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-12-26T22:07:42.018336597Z","io.kubernetes.cri-o.Image":"2694cf044d66591c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-apiserver:v1.18.20","io.kubernetes.cri-o.ImageRef":"2694cf044d66591c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kub
e-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-ingress-addon-legacy-324559\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"78b40af95c64e5112ac985f00b18628c\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-ingress-addon-legacy-324559_78b40af95c64e5112ac985f00b18628c/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/aa6af0479708d4a427e60f20ecce7388eb978083360e19ed13ef5f804f61972f/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-ingress-addon-legacy-324559_kube-system_78b40af95c64e5112ac985f00b18628c_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/c996e9805171ba6a60c38c62fff4f02dfcbb5f902fdb1c05f9ecea41d0980ab7/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"c996e9805171ba6a60c38c62fff4f02dfcbb5f902fdb1c05f9ecea41d0980ab7","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-ingress
-addon-legacy-324559_kube-system_78b40af95c64e5112ac985f00b18628c_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/78b40af95c64e5112ac985f00b18628c/containers/kube-apiserver/0e26cf4c\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/78b40af95c64e5112ac985f00b18628c/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/
certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-ingress-addon-legacy-324559","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"78b40af95c64e5112ac985f00b18628c","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"78b40af95c64e5112ac985f00b18628c","kubernetes.io/config.seen":"2023-12-26T22:07:37.904986631Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"beac61d1ace3e9fcddf8defb7ffd81bc410cf8d57adc9293474065e9908c9ed9","pid":1522,"status":"running","bundle":"/run
/containers/storage/overlay-containers/beac61d1ace3e9fcddf8defb7ffd81bc410cf8d57adc9293474065e9908c9ed9/userdata","rootfs":"/var/lib/containers/storage/overlay/b9db1a7b79594c5685ae393d589ebc5540b262948097e722f414c6ca3aa74f92/merged","created":"2023-12-26T22:07:42.229804372Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"ef5ef709","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"ef5ef709\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"beac61d1ace3e9fcddf8defb7ffd81bc410cf8d57adc9293474065e9908c9ed9","io.kubernetes.cr
i-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-12-26T22:07:42.114949763Z","io.kubernetes.cri-o.Image":"095f37015706de6eedb4f57eb2f9a25a1e3bf4bec63d50ba73f8968ef4094fd1","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-scheduler:v1.18.20","io.kubernetes.cri-o.ImageRef":"095f37015706de6eedb4f57eb2f9a25a1e3bf4bec63d50ba73f8968ef4094fd1","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-ingress-addon-legacy-324559\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"d12e497b0008e22acbcd5a9cf2dd48ac\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-ingress-addon-legacy-324559_d12e497b0008e22acbcd5a9cf2dd48ac/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/b9db1a7b79594c5685ae393d589ebc5540b262948097e722f414c6ca3aa74f92/merged","io.kubernetes.cri-o.Name":"k8s_kube-
scheduler_kube-scheduler-ingress-addon-legacy-324559_kube-system_d12e497b0008e22acbcd5a9cf2dd48ac_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/f51e7e7a68e9c7fad3dd768a7d7a3c4eaa0ac76b3a9d2f5eb094c1fdbe459d23/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"f51e7e7a68e9c7fad3dd768a7d7a3c4eaa0ac76b3a9d2f5eb094c1fdbe459d23","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-ingress-addon-legacy-324559_kube-system_d12e497b0008e22acbcd5a9cf2dd48ac_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/d12e497b0008e22acbcd5a9cf2dd48ac/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/d12e497b0008e22acbcd5a9cf2dd48ac/containers/kube-scheduler/57b5d99b\",\"readonl
y\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-ingress-addon-legacy-324559","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"d12e497b0008e22acbcd5a9cf2dd48ac","kubernetes.io/config.hash":"d12e497b0008e22acbcd5a9cf2dd48ac","kubernetes.io/config.seen":"2023-12-26T22:07:37.915265739Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"dce0f84a819509c98f957f9b06142244dd890242592ef8778d73ac98742e2356","pid":2111,"status":"running","bundle":"/run/containers/storage/overlay-containers/dce0f84a819509c98f957f9b06142244dd890242592ef8778d73ac98742e2356/userdata","rootfs":"/var/lib/containers/storage/overlay/b29bbfa35cdc8999892adceef8d807d0b1cf385edc8adb21a91e689b1e50b5c0/merged","created":"2023-12-26T22:0
8:09.264974084Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"4a4c1276","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"4a4c1276\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"dce0f84a819509c98f957f9b06142244dd890242592ef8778d73ac98742e2356","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-12-26T22:08:09.227409302Z","io.kubernetes.cri-o.Image":"docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","io.kubernetes.cri-o.ImageName":"docker
.io/kindest/kindnetd:v20230809-80a64d96","io.kubernetes.cri-o.ImageRef":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-xp2bf\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"53d917f0-8851-4f9a-95bd-ecf62017fc1d\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-xp2bf_53d917f0-8851-4f9a-95bd-ecf62017fc1d/kindnet-cni/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/b29bbfa35cdc8999892adceef8d807d0b1cf385edc8adb21a91e689b1e50b5c0/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-xp2bf_kube-system_53d917f0-8851-4f9a-95bd-ecf62017fc1d_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/1fe757aa81c2bd5633819bec43a7375b3a7eb2dab88754cf7907ca785f33403d/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"1fe757
aa81c2bd5633819bec43a7375b3a7eb2dab88754cf7907ca785f33403d","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-xp2bf_kube-system_53d917f0-8851-4f9a-95bd-ecf62017fc1d_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/53d917f0-8851-4f9a-95bd-ecf62017fc1d/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/53d917f0-8851-4f9a-95bd-ecf62017fc1d/containers/kindnet-cni/a358bc6e\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":fal
se},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/53d917f0-8851-4f9a-95bd-ecf62017fc1d/volumes/kubernetes.io~secret/kindnet-token-nq5zs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kindnet-xp2bf","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"53d917f0-8851-4f9a-95bd-ecf62017fc1d","kubernetes.io/config.seen":"2023-12-26T22:08:06.564124166Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e9b1d6041f823f638d3ff0bcb0d2fd195521e835aa1beea773b245095f9bb10a","pid":1424,"status":"running","bundle":"/run/containers/storage/overlay-containers/e9b1d6041f823f638d3ff0bcb0d2fd195521e835aa1beea773b245095f9bb10a/userdata","rootfs":"/var/lib/containers/storage/overlay/dac66410764
73cdaaf77c5b64441ab94f8f985fbc7f34d64699ca82f5b8bb585/merged","created":"2023-12-26T22:07:41.962320957Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e60c3db0","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e60c3db0\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"e9b1d6041f823f638d3ff0bcb0d2fd195521e835aa1beea773b245095f9bb10a","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-12-26T22:07:41.915187344Z","io.kubernetes.cri-o.Image":"ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271
404952","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/etcd:3.4.3-0","io.kubernetes.cri-o.ImageRef":"ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-ingress-addon-legacy-324559\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"7cdb2614c29d30f821ee6db03ec3f37a\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-ingress-addon-legacy-324559_7cdb2614c29d30f821ee6db03ec3f37a/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/dac6641076473cdaaf77c5b64441ab94f8f985fbc7f34d64699ca82f5b8bb585/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-ingress-addon-legacy-324559_kube-system_7cdb2614c29d30f821ee6db03ec3f37a_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/a8e498d029d1c2f1032bc35e36b600ee3bc061541c3c2f60926affa960f18b08/userdata/resolv.con
f","io.kubernetes.cri-o.SandboxID":"a8e498d029d1c2f1032bc35e36b600ee3bc061541c3c2f60926affa960f18b08","io.kubernetes.cri-o.SandboxName":"k8s_etcd-ingress-addon-legacy-324559_kube-system_7cdb2614c29d30f821ee6db03ec3f37a_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/7cdb2614c29d30f821ee6db03ec3f37a/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/7cdb2614c29d30f821ee6db03ec3f37a/containers/etcd/0700316c\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/l
ib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-ingress-addon-legacy-324559","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"7cdb2614c29d30f821ee6db03ec3f37a","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"7cdb2614c29d30f821ee6db03ec3f37a","kubernetes.io/config.seen":"2023-12-26T22:07:37.916845822Z","kubernetes.io/config.source":"file"},"owner":"root"}]
	I1226 22:08:22.063526  733646 cri.go:126] list returned 7 containers
	I1226 22:08:22.063541  733646 cri.go:129] container: {ID:02f33002d9479a64055fccd43ef1ca7ab676214fbd5ccf695f09d9e759813c6e Status:running}
	I1226 22:08:22.063557  733646 cri.go:135] skipping {02f33002d9479a64055fccd43ef1ca7ab676214fbd5ccf695f09d9e759813c6e running}: state = "running", want "paused"
	I1226 22:08:22.063566  733646 cri.go:129] container: {ID:36c3a5e7fc3b0b3e62d89ddc70be43b8929f62a2440886ceded856e6e6596020 Status:running}
	I1226 22:08:22.063573  733646 cri.go:135] skipping {36c3a5e7fc3b0b3e62d89ddc70be43b8929f62a2440886ceded856e6e6596020 running}: state = "running", want "paused"
	I1226 22:08:22.063584  733646 cri.go:129] container: {ID:46e6d02e3c5745545bfd24ad3504b526b69d1d83ce3073bf30c82b94071ba620 Status:running}
	I1226 22:08:22.063596  733646 cri.go:135] skipping {46e6d02e3c5745545bfd24ad3504b526b69d1d83ce3073bf30c82b94071ba620 running}: state = "running", want "paused"
	I1226 22:08:22.063603  733646 cri.go:129] container: {ID:575c4b5034ded1ed2f54ae4bccbe637a9d78408e528f471d7105f50193c84be5 Status:running}
	I1226 22:08:22.063614  733646 cri.go:135] skipping {575c4b5034ded1ed2f54ae4bccbe637a9d78408e528f471d7105f50193c84be5 running}: state = "running", want "paused"
	I1226 22:08:22.063621  733646 cri.go:129] container: {ID:beac61d1ace3e9fcddf8defb7ffd81bc410cf8d57adc9293474065e9908c9ed9 Status:running}
	I1226 22:08:22.063631  733646 cri.go:135] skipping {beac61d1ace3e9fcddf8defb7ffd81bc410cf8d57adc9293474065e9908c9ed9 running}: state = "running", want "paused"
	I1226 22:08:22.063638  733646 cri.go:129] container: {ID:dce0f84a819509c98f957f9b06142244dd890242592ef8778d73ac98742e2356 Status:running}
	I1226 22:08:22.063649  733646 cri.go:135] skipping {dce0f84a819509c98f957f9b06142244dd890242592ef8778d73ac98742e2356 running}: state = "running", want "paused"
	I1226 22:08:22.063655  733646 cri.go:129] container: {ID:e9b1d6041f823f638d3ff0bcb0d2fd195521e835aa1beea773b245095f9bb10a Status:running}
	I1226 22:08:22.063662  733646 cri.go:135] skipping {e9b1d6041f823f638d3ff0bcb0d2fd195521e835aa1beea773b245095f9bb10a running}: state = "running", want "paused"
	I1226 22:08:22.067070  733646 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I1226 22:08:22.069313  733646 config.go:182] Loaded profile config "ingress-addon-legacy-324559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1226 22:08:22.069373  733646 addons.go:69] Setting ingress=true in profile "ingress-addon-legacy-324559"
	I1226 22:08:22.069390  733646 addons.go:237] Setting addon ingress=true in "ingress-addon-legacy-324559"
	I1226 22:08:22.069424  733646 host.go:66] Checking if "ingress-addon-legacy-324559" exists ...
	I1226 22:08:22.069885  733646 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-324559 --format={{.State.Status}}
	I1226 22:08:22.097476  733646 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I1226 22:08:22.099480  733646 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I1226 22:08:22.101540  733646 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	I1226 22:08:22.104127  733646 addons.go:429] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1226 22:08:22.104151  733646 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15618 bytes)
	I1226 22:08:22.104226  733646 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-324559
	I1226 22:08:22.126109  733646 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33686 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/ingress-addon-legacy-324559/id_rsa Username:docker}
	I1226 22:08:22.262254  733646 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1226 22:08:22.953571  733646 addons.go:473] Verifying addon ingress=true in "ingress-addon-legacy-324559"
	I1226 22:08:22.955845  733646 out.go:177] * Verifying ingress addon...
	I1226 22:08:22.958358  733646 kapi.go:59] client config for ingress-addon-legacy-324559: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/client.crt", KeyFile:"/home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/client.key", CAFile:"/home/jenkins/minikube-integration/17857-697646/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bfa00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
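A minimal client-go sketch of a rest.Config equivalent to the one dumped above: certificate-based authentication against the profile's API server. The host and file names come from the log; the long directory prefix is elided with "..." as a placeholder.

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// newClient builds a clientset from the profile's client cert/key and CA,
// matching the TLSClientConfig fields shown in the kapi.go:59 dump.
func newClient() (*kubernetes.Clientset, error) {
	cfg := &rest.Config{
		Host: "https://192.168.49.2:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: ".../profiles/ingress-addon-legacy-324559/client.crt",
			KeyFile:  ".../profiles/ingress-addon-legacy-324559/client.key",
			CAFile:   ".../.minikube/ca.crt",
		},
	}
	return kubernetes.NewForConfig(cfg)
}

func main() {
	if _, err := newClient(); err != nil {
		panic(err)
	}
}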
	I1226 22:08:22.959208  733646 cert_rotation.go:137] Starting client certificate rotation controller
	I1226 22:08:22.959311  733646 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1226 22:08:22.991349  733646 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
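A simplified sketch of the wait loop behind the kapi.go:96 lines that follow: list the pods matching the label selector every 500ms and keep polling until all of them report phase Running, under an overall timeout. This is an illustration under stated assumptions, not minikube's exact kapi implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPods polls until every pod matching selector in ns is Running,
// printing a "waiting for pod" line per attempt like the log below.
func waitForPods(cs *kubernetes.Clientset, ns, selector string) error {
	return wait.PollImmediate(500*time.Millisecond, 8*time.Minute,
		func() (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, nil // treat transient API errors as "not yet"
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n",
						selector, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil
		})
}

func main() {
	// Building a real clientset is shown in the rest.Config sketch above.
}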
	I1226 22:08:22.991440  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:23.464074  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:23.965040  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:24.463299  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:24.963945  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:25.464596  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:25.963783  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:26.464005  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:26.963825  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:27.463296  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:27.963360  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:28.464075  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:28.963284  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:29.463304  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:29.963837  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:30.463968  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:30.973647  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:31.463988  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:31.963770  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:32.464297  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:32.963658  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:33.464429  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:33.963812  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:34.464644  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:34.963621  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:35.464018  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:35.963172  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:36.463383  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:36.964192  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:37.463384  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:37.963675  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:38.464063  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:38.963826  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:39.463026  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:39.963216  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:40.463237  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:40.964847  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:41.463698  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:41.964392  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:42.464191  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:42.963450  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:43.463205  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:43.963612  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:44.463946  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:44.963374  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:45.463154  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:45.963426  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:46.464204  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:46.963637  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:47.464061  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:47.963735  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:48.464012  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:48.963212  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:49.463241  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:49.963541  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:50.463953  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:50.967611  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:51.463948  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:51.963670  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:52.464155  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:52.963674  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:53.463170  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:53.963063  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:54.463653  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:54.963581  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:55.464941  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:55.963384  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:56.464482  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:56.963992  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:57.463340  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:57.963897  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:58.463028  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:58.963280  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:59.463607  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:08:59.963588  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:00.464255  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:00.966614  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:01.463565  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:01.963385  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:02.463736  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:02.963824  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:03.463940  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:03.964293  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:04.463372  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:04.964611  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:05.464186  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:05.963030  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:06.463336  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:06.964225  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:07.463467  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:07.963548  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:08.463908  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:08.963279  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:09.463893  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:09.963217  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:10.464349  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:10.963813  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:11.463740  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:11.963639  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:12.464098  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:12.963492  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:13.463756  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:13.963674  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:14.464070  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:14.963296  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:15.463188  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:15.963744  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:16.463931  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:16.963553  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:17.464048  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:17.963445  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:18.464043  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:18.963207  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:19.463449  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:19.963873  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:20.463549  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:20.967445  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:21.464469  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:21.964230  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:22.463324  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:22.963703  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:23.464073  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:23.963503  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:24.464369  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:24.963597  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:25.463920  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:25.963084  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:26.463357  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:26.963816  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:27.463769  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:27.963685  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:28.464272  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:28.964352  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:29.464269  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:29.963663  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:30.464313  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:30.966789  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:31.463913  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:31.963404  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:32.463764  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:32.964064  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:33.463458  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:33.963732  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:34.464182  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:34.963485  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:35.463667  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:35.963591  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:36.464017  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:36.964898  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:37.464125  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:37.963436  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:38.463939  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:38.963586  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:39.464281  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:39.963582  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:40.464266  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:40.970806  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:41.463396  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:41.964024  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:42.463881  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:42.963231  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:43.463350  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:43.963875  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:44.463616  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:44.963946  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:45.464592  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:45.963706  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:46.464079  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:46.963461  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:47.463964  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:47.963179  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:48.463406  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:48.963628  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:49.463896  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:49.963841  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:50.463146  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:50.964792  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:51.463180  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:51.965521  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:52.464198  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:52.964079  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:53.463318  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:53.963845  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:54.463989  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:54.963207  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:55.465902  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:55.963417  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:56.463266  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:56.963664  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:57.464091  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:57.963525  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:58.463777  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:58.962989  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:59.464077  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:09:59.963349  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:00.464420  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:00.963666  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:01.464348  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:01.963769  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:02.464214  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:02.964024  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:03.463538  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:03.963641  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:04.464227  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:04.963253  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:05.463444  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:05.964197  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:06.463462  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:06.964049  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:07.463293  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:07.963929  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:08.463319  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:08.965011  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:09.463478  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:09.963513  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:10.464078  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:10.966806  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:11.463698  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:11.963765  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:12.463336  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:12.963591  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:13.464040  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:13.963273  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:14.463339  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:14.963374  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:15.463393  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:15.963344  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:16.463723  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:16.963408  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:17.463936  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:17.963263  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:18.463418  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:18.963530  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:19.464189  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:19.963540  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:20.463826  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:20.965921  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:21.463476  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:21.964068  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:22.463472  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:22.963536  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:23.464184  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:23.963595  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:24.463908  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:24.963658  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:25.464061  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:25.963338  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:26.465640  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:26.963135  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:27.463766  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:27.963074  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:28.463333  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:28.963517  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:29.463889  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:29.963447  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:30.463906  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:30.963701  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:31.464047  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:31.963528  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:32.464028  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:32.963963  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:33.463261  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:33.963374  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:34.463786  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:34.963514  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:35.463966  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:35.963048  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:36.463020  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:36.963636  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:37.463887  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:37.963272  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:38.463255  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:38.963556  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:39.463947  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:39.963197  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:40.463264  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:40.969720  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:41.464158  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:41.963558  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:42.464292  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:42.963599  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:43.464351  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:43.963558  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:44.464204  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:44.963469  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:45.463835  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:45.963041  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:46.463203  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:46.963581  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:47.463950  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:47.963058  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:48.463312  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:48.963444  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:49.463964  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:49.963216  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:50.463116  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:50.965223  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:51.463361  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:51.963837  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:52.462997  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:52.963331  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:53.463544  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:53.963598  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:54.464187  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:54.963131  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:55.463260  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:55.962965  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:56.463298  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:56.963588  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:57.463920  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:57.963206  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:58.463326  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:58.963296  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:59.465017  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:10:59.963026  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:00.463343  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:00.969258  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:01.463460  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:01.963150  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:02.463972  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:02.963572  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:03.463875  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:03.963160  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:04.463598  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:04.963822  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:05.463410  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:05.963659  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:06.464079  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:06.963472  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:07.463868  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:07.963171  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:08.463384  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:08.963655  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:09.464100  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:09.963861  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:10.463198  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:10.970443  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:11.463719  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:11.963360  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:12.463622  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:12.964174  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:13.463589  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:13.963944  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:14.463082  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:14.963312  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:15.463689  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:15.963598  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:16.463803  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:16.963246  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:17.463464  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:17.963635  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:18.463905  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:18.963114  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:19.463754  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:19.964191  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:20.463316  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:20.964915  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:21.463199  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:21.963384  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:22.463767  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:22.963085  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:23.463107  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:23.963233  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:24.463189  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:24.963129  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:25.463321  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:25.963583  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:26.463978  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:26.963371  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:27.463491  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:27.963704  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:28.464030  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:28.963477  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:29.463808  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:29.963202  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:30.463547  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:30.968568  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:31.464009  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:31.963480  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:32.463988  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:32.964037  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:33.463533  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:33.963847  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:34.463078  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:34.963036  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:35.463345  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:35.963479  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:36.463914  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:36.963647  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:37.464066  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:37.963403  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:38.463627  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:11:38.963794  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	... (identical "waiting for pod \"app.kubernetes.io/name=ingress-nginx\", current state: Pending: [<nil>]" line logged every ~500ms from 22:11:39 through 22:14:21; the pod remained Pending for the entire interval) ...
	I1226 22:14:22.463147  733646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1226 22:14:22.959913  733646 kapi.go:81] temporary error: getting Pods with label selector "app.kubernetes.io/name=ingress-nginx" : [client rate limiter Wait returned an error: context deadline exceeded]
	I1226 22:14:22.959948  733646 kapi.go:107] duration metric: took 6m0.000638306s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1226 22:14:22.962155  733646 out.go:177] 
	W1226 22:14:22.964465  733646 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]
	W1226 22:14:22.964495  733646 out.go:239] * 
	W1226 22:14:22.969993  733646 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1226 22:14:22.972236  733646 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
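For context: the stderr above shows the addon's 6m0s wait budget expiring while every poll of the label selector app.kubernetes.io/name=ingress-nginx reported Pending, until the client rate limiter surfaced the context deadline. What follows is a minimal client-go sketch of that style of label-selector poll, for readers reproducing the check outside the test harness. It is not minikube's actual kapi.go; the kubeconfig path, the "ingress-nginx" namespace, and the 500ms interval are assumptions inferred from the log.

    // pollsketch.go - a minimal sketch (NOT minikube's kapi.go) of polling
    // pods by label selector until they leave Pending. Assumes a default
    // kubeconfig and the ingress-nginx namespace.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	// Mirror the 6-minute budget the failed wait used.
    	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
    	defer cancel()

    	sel := "app.kubernetes.io/name=ingress-nginx"
    	for {
    		pods, err := cs.CoreV1().Pods("ingress-nginx").List(ctx, metav1.ListOptions{LabelSelector: sel})
    		if err != nil {
    			// The real code treats this as temporary; once the deadline
    			// passes, the rate limiter returns the context error seen above.
    			fmt.Println("temporary error:", err)
    			return
    		}
    		running := 0
    		for _, p := range pods.Items {
    			if p.Status.Phase == corev1.PodRunning {
    				running++
    			}
    		}
    		if len(pods.Items) > 0 && running == len(pods.Items) {
    			fmt.Println("all pods running")
    			return
    		}
    		fmt.Printf("waiting for pod %q, current state: Pending\n", sel)
    		time.Sleep(500 * time.Millisecond) // the log shows ~500ms between polls
    	}
    }

Run against the cluster in this report, the loop would behave exactly like the log above: every List succeeds, no pod reaches Running, and the context deadline fires at the 6-minute mark.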
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-324559
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-324559:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4094ae8e0876f71f500e3ad840801c838e42523ab5cdca760fd0587649ebf25d",
	        "Created": "2023-12-26T22:07:18.971854467Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 731166,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-26T22:07:19.302540911Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3bfff26a1ae256fcdf8f10a333efdefbe26edc5c1669e1cc5c973c016e44d3c4",
	        "ResolvConfPath": "/var/lib/docker/containers/4094ae8e0876f71f500e3ad840801c838e42523ab5cdca760fd0587649ebf25d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4094ae8e0876f71f500e3ad840801c838e42523ab5cdca760fd0587649ebf25d/hostname",
	        "HostsPath": "/var/lib/docker/containers/4094ae8e0876f71f500e3ad840801c838e42523ab5cdca760fd0587649ebf25d/hosts",
	        "LogPath": "/var/lib/docker/containers/4094ae8e0876f71f500e3ad840801c838e42523ab5cdca760fd0587649ebf25d/4094ae8e0876f71f500e3ad840801c838e42523ab5cdca760fd0587649ebf25d-json.log",
	        "Name": "/ingress-addon-legacy-324559",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-324559:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-324559",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/af62208332817967e38e21b2d62dcd3015730420e4acd6c9bdbc71d008674fa0-init/diff:/var/lib/docker/overlay2/45396a29879cab7c8a67d68e40c59b67c1c0ba964e9ed87a152af8cc5862c477/diff",
	                "MergedDir": "/var/lib/docker/overlay2/af62208332817967e38e21b2d62dcd3015730420e4acd6c9bdbc71d008674fa0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/af62208332817967e38e21b2d62dcd3015730420e4acd6c9bdbc71d008674fa0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/af62208332817967e38e21b2d62dcd3015730420e4acd6c9bdbc71d008674fa0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-324559",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-324559/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-324559",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-324559",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-324559",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a3b543683e4e244300480628472cac8cc83ed7830cea43ebc7aa6f93cc64c660",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33686"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33685"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33682"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33684"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33683"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a3b543683e4e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-324559": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "4094ae8e0876",
	                        "ingress-addon-legacy-324559"
	                    ],
	                    "NetworkID": "b153fb06ea0ee03a524a832f3d32eaf518f15e5ce2de2b14e3e5d6521310ae6c",
	                    "EndpointID": "338cdf8169de82c124a9f0c9772d55f109e6584bf02d17db2755b29d7f83567d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
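The inspect output above is the healthy-node case: "Status" is "running" with "ExitCode" 0, so the failure sits inside the cluster rather than in the kicbase container itself. A minimal sketch of pulling those same State fields with the Docker Go SDK follows; it assumes the SDK is available and the daemon is reachable via the usual DOCKER_* environment variables, and the container name is taken from this report.

    // inspectsketch.go - a minimal sketch that extracts the State fields the
    // post-mortem above relies on, using the Docker Go SDK.
    package main

    import (
    	"context"
    	"fmt"

    	"github.com/docker/docker/client"
    )

    func main() {
    	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
    	if err != nil {
    		panic(err)
    	}
    	defer cli.Close()

    	// Container name from this report; any container name or ID works.
    	info, err := cli.ContainerInspect(context.Background(), "ingress-addon-legacy-324559")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("status=%s running=%v exitCode=%d\n",
    		info.State.Status, info.State.Running, info.State.ExitCode)
    }

Any non-running Status or non-zero ExitCode here would shift suspicion from the addon to the node container itself.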
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-324559 -n ingress-addon-legacy-324559
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddonActivation FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-324559 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-324559 logs -n 25: (1.41125833s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation logs: 
-- stdout --
	
	==> Audit <==
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| image          | functional-262391 image rm                                             | functional-262391           | jenkins | v1.32.0 | 26 Dec 23 22:06 UTC | 26 Dec 23 22:06 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-262391               |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-262391 image ls                                             | functional-262391           | jenkins | v1.32.0 | 26 Dec 23 22:06 UTC | 26 Dec 23 22:06 UTC |
	| image          | functional-262391 image load                                           | functional-262391           | jenkins | v1.32.0 | 26 Dec 23 22:06 UTC | 26 Dec 23 22:06 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-262391 image ls                                             | functional-262391           | jenkins | v1.32.0 | 26 Dec 23 22:06 UTC | 26 Dec 23 22:06 UTC |
	| image          | functional-262391 image save --daemon                                  | functional-262391           | jenkins | v1.32.0 | 26 Dec 23 22:06 UTC | 26 Dec 23 22:06 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-262391               |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-262391 ssh sudo cat                                         | functional-262391           | jenkins | v1.32.0 | 26 Dec 23 22:06 UTC | 26 Dec 23 22:06 UTC |
	|                | /etc/test/nested/copy/703036/hosts                                     |                             |         |         |                     |                     |
	| ssh            | functional-262391 ssh sudo cat                                         | functional-262391           | jenkins | v1.32.0 | 26 Dec 23 22:06 UTC | 26 Dec 23 22:06 UTC |
	|                | /etc/ssl/certs/703036.pem                                              |                             |         |         |                     |                     |
	| ssh            | functional-262391 ssh sudo cat                                         | functional-262391           | jenkins | v1.32.0 | 26 Dec 23 22:06 UTC | 26 Dec 23 22:06 UTC |
	|                | /usr/share/ca-certificates/703036.pem                                  |                             |         |         |                     |                     |
	| ssh            | functional-262391 ssh sudo cat                                         | functional-262391           | jenkins | v1.32.0 | 26 Dec 23 22:06 UTC | 26 Dec 23 22:06 UTC |
	|                | /etc/ssl/certs/51391683.0                                              |                             |         |         |                     |                     |
	| ssh            | functional-262391 ssh sudo cat                                         | functional-262391           | jenkins | v1.32.0 | 26 Dec 23 22:06 UTC | 26 Dec 23 22:06 UTC |
	|                | /etc/ssl/certs/7030362.pem                                             |                             |         |         |                     |                     |
	| ssh            | functional-262391 ssh sudo cat                                         | functional-262391           | jenkins | v1.32.0 | 26 Dec 23 22:06 UTC | 26 Dec 23 22:06 UTC |
	|                | /usr/share/ca-certificates/7030362.pem                                 |                             |         |         |                     |                     |
	| ssh            | functional-262391 ssh sudo cat                                         | functional-262391           | jenkins | v1.32.0 | 26 Dec 23 22:06 UTC | 26 Dec 23 22:06 UTC |
	|                | /etc/ssl/certs/3ec20f2e.0                                              |                             |         |         |                     |                     |
	| image          | functional-262391                                                      | functional-262391           | jenkins | v1.32.0 | 26 Dec 23 22:06 UTC | 26 Dec 23 22:06 UTC |
	|                | image ls --format short                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-262391                                                      | functional-262391           | jenkins | v1.32.0 | 26 Dec 23 22:06 UTC | 26 Dec 23 22:06 UTC |
	|                | image ls --format yaml                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-262391 ssh pgrep                                            | functional-262391           | jenkins | v1.32.0 | 26 Dec 23 22:06 UTC |                     |
	|                | buildkitd                                                              |                             |         |         |                     |                     |
	| image          | functional-262391                                                      | functional-262391           | jenkins | v1.32.0 | 26 Dec 23 22:06 UTC | 26 Dec 23 22:06 UTC |
	|                | image ls --format json                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-262391 image build -t                                       | functional-262391           | jenkins | v1.32.0 | 26 Dec 23 22:06 UTC | 26 Dec 23 22:06 UTC |
	|                | localhost/my-image:functional-262391                                   |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                       |                             |         |         |                     |                     |
	| image          | functional-262391                                                      | functional-262391           | jenkins | v1.32.0 | 26 Dec 23 22:06 UTC | 26 Dec 23 22:06 UTC |
	|                | image ls --format table                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| update-context | functional-262391                                                      | functional-262391           | jenkins | v1.32.0 | 26 Dec 23 22:06 UTC | 26 Dec 23 22:06 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-262391                                                      | functional-262391           | jenkins | v1.32.0 | 26 Dec 23 22:06 UTC | 26 Dec 23 22:06 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-262391                                                      | functional-262391           | jenkins | v1.32.0 | 26 Dec 23 22:06 UTC | 26 Dec 23 22:06 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| image          | functional-262391 image ls                                             | functional-262391           | jenkins | v1.32.0 | 26 Dec 23 22:06 UTC | 26 Dec 23 22:06 UTC |
	| delete         | -p functional-262391                                                   | functional-262391           | jenkins | v1.32.0 | 26 Dec 23 22:06 UTC | 26 Dec 23 22:06 UTC |
	| start          | -p ingress-addon-legacy-324559                                         | ingress-addon-legacy-324559 | jenkins | v1.32.0 | 26 Dec 23 22:06 UTC | 26 Dec 23 22:08 UTC |
	|                | --kubernetes-version=v1.18.20                                          |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                                                   |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-324559                                            | ingress-addon-legacy-324559 | jenkins | v1.32.0 | 26 Dec 23 22:08 UTC |                     |
	|                | addons enable ingress                                                  |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/26 22:06:59
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1226 22:06:59.345794  730714 out.go:296] Setting OutFile to fd 1 ...
	I1226 22:06:59.345979  730714 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 22:06:59.345988  730714 out.go:309] Setting ErrFile to fd 2...
	I1226 22:06:59.345994  730714 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 22:06:59.346257  730714 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17857-697646/.minikube/bin
	I1226 22:06:59.346708  730714 out.go:303] Setting JSON to false
	I1226 22:06:59.347567  730714 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":20953,"bootTime":1703607466,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1226 22:06:59.347645  730714 start.go:138] virtualization:  
	I1226 22:06:59.350658  730714 out.go:177] * [ingress-addon-legacy-324559] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1226 22:06:59.353593  730714 out.go:177]   - MINIKUBE_LOCATION=17857
	I1226 22:06:59.356030  730714 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1226 22:06:59.353727  730714 notify.go:220] Checking for updates...
	I1226 22:06:59.360930  730714 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17857-697646/kubeconfig
	I1226 22:06:59.363563  730714 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17857-697646/.minikube
	I1226 22:06:59.366338  730714 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1226 22:06:59.369035  730714 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1226 22:06:59.371569  730714 driver.go:392] Setting default libvirt URI to qemu:///system
	I1226 22:06:59.395736  730714 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1226 22:06:59.395850  730714 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 22:06:59.483957  730714 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2023-12-26 22:06:59.473710817 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1226 22:06:59.484053  730714 docker.go:295] overlay module found
	I1226 22:06:59.486771  730714 out.go:177] * Using the docker driver based on user configuration
	I1226 22:06:59.488708  730714 start.go:298] selected driver: docker
	I1226 22:06:59.488744  730714 start.go:902] validating driver "docker" against <nil>
	I1226 22:06:59.488759  730714 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1226 22:06:59.489362  730714 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 22:06:59.555423  730714 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2023-12-26 22:06:59.546108653 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1226 22:06:59.555590  730714 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1226 22:06:59.555837  730714 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1226 22:06:59.558277  730714 out.go:177] * Using Docker driver with root privileges
	I1226 22:06:59.560324  730714 cni.go:84] Creating CNI manager for ""
	I1226 22:06:59.560348  730714 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1226 22:06:59.560361  730714 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1226 22:06:59.560375  730714 start_flags.go:323] config:
	{Name:ingress-addon-legacy-324559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-324559 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 22:06:59.562857  730714 out.go:177] * Starting control plane node ingress-addon-legacy-324559 in cluster ingress-addon-legacy-324559
	I1226 22:06:59.565149  730714 cache.go:121] Beginning downloading kic base image for docker with crio
	I1226 22:06:59.567241  730714 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I1226 22:06:59.569226  730714 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1226 22:06:59.569312  730714 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I1226 22:06:59.586379  730714 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon, skipping pull
	I1226 22:06:59.586405  730714 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in daemon, skipping load
	I1226 22:06:59.636789  730714 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I1226 22:06:59.636825  730714 cache.go:56] Caching tarball of preloaded images
	I1226 22:06:59.637009  730714 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1226 22:06:59.639485  730714 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1226 22:06:59.641684  730714 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1226 22:06:59.753039  730714 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4?checksum=md5:8ddd7f37d9a9977fe856222993d36c3d -> /home/jenkins/minikube-integration/17857-697646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I1226 22:07:11.060217  730714 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1226 22:07:11.060321  730714 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17857-697646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1226 22:07:12.250416  730714 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I1226 22:07:12.250810  730714 profile.go:148] Saving config to /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/config.json ...
	I1226 22:07:12.250843  730714 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/config.json: {Name:mk79c37621425bb429e102f6d976700ae00d3f69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 22:07:12.251036  730714 cache.go:194] Successfully downloaded all kic artifacts
	I1226 22:07:12.251100  730714 start.go:365] acquiring machines lock for ingress-addon-legacy-324559: {Name:mk486fccab415ae2bf346d53fa0d55b82bd64c36 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 22:07:12.251169  730714 start.go:369] acquired machines lock for "ingress-addon-legacy-324559" in 48.458µs
	I1226 22:07:12.251191  730714 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-324559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-324559 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1226 22:07:12.251265  730714 start.go:125] createHost starting for "" (driver="docker")
	I1226 22:07:12.253787  730714 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1226 22:07:12.254086  730714 start.go:159] libmachine.API.Create for "ingress-addon-legacy-324559" (driver="docker")
	I1226 22:07:12.254114  730714 client.go:168] LocalClient.Create starting
	I1226 22:07:12.254176  730714 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem
	I1226 22:07:12.254238  730714 main.go:141] libmachine: Decoding PEM data...
	I1226 22:07:12.254257  730714 main.go:141] libmachine: Parsing certificate...
	I1226 22:07:12.254306  730714 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17857-697646/.minikube/certs/cert.pem
	I1226 22:07:12.254329  730714 main.go:141] libmachine: Decoding PEM data...
	I1226 22:07:12.254344  730714 main.go:141] libmachine: Parsing certificate...
	I1226 22:07:12.254766  730714 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-324559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1226 22:07:12.272705  730714 cli_runner.go:211] docker network inspect ingress-addon-legacy-324559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1226 22:07:12.272790  730714 network_create.go:281] running [docker network inspect ingress-addon-legacy-324559] to gather additional debugging logs...
	I1226 22:07:12.272811  730714 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-324559
	W1226 22:07:12.290733  730714 cli_runner.go:211] docker network inspect ingress-addon-legacy-324559 returned with exit code 1
	I1226 22:07:12.290768  730714 network_create.go:284] error running [docker network inspect ingress-addon-legacy-324559]: docker network inspect ingress-addon-legacy-324559: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-324559 not found
	I1226 22:07:12.290785  730714 network_create.go:286] output of [docker network inspect ingress-addon-legacy-324559]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-324559 not found
	
	** /stderr **
	I1226 22:07:12.290881  730714 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1226 22:07:12.308151  730714 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40004e0240}
	I1226 22:07:12.308198  730714 network_create.go:124] attempt to create docker network ingress-addon-legacy-324559 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1226 22:07:12.308256  730714 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-324559 ingress-addon-legacy-324559
	I1226 22:07:12.384279  730714 network_create.go:108] docker network ingress-addon-legacy-324559 192.168.49.0/24 created
	I1226 22:07:12.384313  730714 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-324559" container
	I1226 22:07:12.384413  730714 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1226 22:07:12.401520  730714 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-324559 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-324559 --label created_by.minikube.sigs.k8s.io=true
	I1226 22:07:12.420464  730714 oci.go:103] Successfully created a docker volume ingress-addon-legacy-324559
	I1226 22:07:12.420618  730714 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-324559-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-324559 --entrypoint /usr/bin/test -v ingress-addon-legacy-324559:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib
	I1226 22:07:13.921405  730714 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-324559-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-324559 --entrypoint /usr/bin/test -v ingress-addon-legacy-324559:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib: (1.500745419s)
	I1226 22:07:13.921441  730714 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-324559
	I1226 22:07:13.921468  730714 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1226 22:07:13.921487  730714 kic.go:194] Starting extracting preloaded images to volume ...
	I1226 22:07:13.921573  730714 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17857-697646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-324559:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir
	I1226 22:07:18.882381  730714 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17857-697646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-324559:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir: (4.960763672s)
	I1226 22:07:18.882413  730714 kic.go:203] duration metric: took 4.960924 seconds to extract preloaded images to volume
	W1226 22:07:18.882557  730714 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1226 22:07:18.882667  730714 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1226 22:07:18.954854  730714 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-324559 --name ingress-addon-legacy-324559 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-324559 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-324559 --network ingress-addon-legacy-324559 --ip 192.168.49.2 --volume ingress-addon-legacy-324559:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c
	I1226 22:07:19.311019  730714 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-324559 --format={{.State.Running}}
	I1226 22:07:19.335024  730714 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-324559 --format={{.State.Status}}
	I1226 22:07:19.358303  730714 cli_runner.go:164] Run: docker exec ingress-addon-legacy-324559 stat /var/lib/dpkg/alternatives/iptables
	I1226 22:07:19.441188  730714 oci.go:144] the created container "ingress-addon-legacy-324559" has a running status.
	I1226 22:07:19.441223  730714 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17857-697646/.minikube/machines/ingress-addon-legacy-324559/id_rsa...
	I1226 22:07:20.027884  730714 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/machines/ingress-addon-legacy-324559/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1226 22:07:20.027941  730714 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17857-697646/.minikube/machines/ingress-addon-legacy-324559/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1226 22:07:20.068037  730714 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-324559 --format={{.State.Status}}
	I1226 22:07:20.096273  730714 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1226 22:07:20.096301  730714 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-324559 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1226 22:07:20.178375  730714 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-324559 --format={{.State.Status}}
	I1226 22:07:20.217007  730714 machine.go:88] provisioning docker machine ...
	I1226 22:07:20.217040  730714 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-324559"
	I1226 22:07:20.217109  730714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-324559
	I1226 22:07:20.250112  730714 main.go:141] libmachine: Using SSH client type: native
	I1226 22:07:20.250552  730714 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33686 <nil> <nil>}
	I1226 22:07:20.250573  730714 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-324559 && echo "ingress-addon-legacy-324559" | sudo tee /etc/hostname
	I1226 22:07:20.428170  730714 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-324559
	
	I1226 22:07:20.428311  730714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-324559
	I1226 22:07:20.459683  730714 main.go:141] libmachine: Using SSH client type: native
	I1226 22:07:20.460082  730714 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33686 <nil> <nil>}
	I1226 22:07:20.460106  730714 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-324559' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-324559/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-324559' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1226 22:07:20.609941  730714 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1226 22:07:20.609968  730714 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17857-697646/.minikube CaCertPath:/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17857-697646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17857-697646/.minikube}
	I1226 22:07:20.609997  730714 ubuntu.go:177] setting up certificates
	I1226 22:07:20.610011  730714 provision.go:83] configureAuth start
	I1226 22:07:20.610072  730714 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-324559
	I1226 22:07:20.633095  730714 provision.go:138] copyHostCerts
	I1226 22:07:20.633152  730714 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17857-697646/.minikube/ca.pem
	I1226 22:07:20.633184  730714 exec_runner.go:144] found /home/jenkins/minikube-integration/17857-697646/.minikube/ca.pem, removing ...
	I1226 22:07:20.633196  730714 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17857-697646/.minikube/ca.pem
	I1226 22:07:20.633265  730714 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17857-697646/.minikube/ca.pem (1082 bytes)
	I1226 22:07:20.633362  730714 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17857-697646/.minikube/cert.pem
	I1226 22:07:20.633387  730714 exec_runner.go:144] found /home/jenkins/minikube-integration/17857-697646/.minikube/cert.pem, removing ...
	I1226 22:07:20.633395  730714 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17857-697646/.minikube/cert.pem
	I1226 22:07:20.633422  730714 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17857-697646/.minikube/cert.pem (1123 bytes)
	I1226 22:07:20.633478  730714 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17857-697646/.minikube/key.pem
	I1226 22:07:20.633502  730714 exec_runner.go:144] found /home/jenkins/minikube-integration/17857-697646/.minikube/key.pem, removing ...
	I1226 22:07:20.633510  730714 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17857-697646/.minikube/key.pem
	I1226 22:07:20.633536  730714 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17857-697646/.minikube/key.pem (1679 bytes)
	I1226 22:07:20.633584  730714 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17857-697646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-324559 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-324559]
	I1226 22:07:20.981925  730714 provision.go:172] copyRemoteCerts
	I1226 22:07:20.982025  730714 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1226 22:07:20.982076  730714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-324559
	I1226 22:07:21.000912  730714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33686 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/ingress-addon-legacy-324559/id_rsa Username:docker}
	I1226 22:07:21.103433  730714 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1226 22:07:21.103497  730714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1226 22:07:21.133442  730714 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1226 22:07:21.133513  730714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I1226 22:07:21.165992  730714 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1226 22:07:21.166100  730714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1226 22:07:21.196658  730714 provision.go:86] duration metric: configureAuth took 586.612094ms
	I1226 22:07:21.196725  730714 ubuntu.go:193] setting minikube options for container-runtime
	I1226 22:07:21.196941  730714 config.go:182] Loaded profile config "ingress-addon-legacy-324559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1226 22:07:21.197051  730714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-324559
	I1226 22:07:21.218499  730714 main.go:141] libmachine: Using SSH client type: native
	I1226 22:07:21.218934  730714 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33686 <nil> <nil>}
	I1226 22:07:21.218962  730714 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1226 22:07:21.502073  730714 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1226 22:07:21.502098  730714 machine.go:91] provisioned docker machine in 1.285068182s
	I1226 22:07:21.502107  730714 client.go:171] LocalClient.Create took 9.247985258s
	I1226 22:07:21.502121  730714 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-324559" took 9.248036342s
	I1226 22:07:21.502128  730714 start.go:300] post-start starting for "ingress-addon-legacy-324559" (driver="docker")
	I1226 22:07:21.502141  730714 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1226 22:07:21.502208  730714 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1226 22:07:21.502258  730714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-324559
	I1226 22:07:21.520366  730714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33686 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/ingress-addon-legacy-324559/id_rsa Username:docker}
	I1226 22:07:21.624815  730714 ssh_runner.go:195] Run: cat /etc/os-release
	I1226 22:07:21.629306  730714 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1226 22:07:21.629343  730714 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1226 22:07:21.629355  730714 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1226 22:07:21.629362  730714 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1226 22:07:21.629376  730714 filesync.go:126] Scanning /home/jenkins/minikube-integration/17857-697646/.minikube/addons for local assets ...
	I1226 22:07:21.629448  730714 filesync.go:126] Scanning /home/jenkins/minikube-integration/17857-697646/.minikube/files for local assets ...
	I1226 22:07:21.629557  730714 filesync.go:149] local asset: /home/jenkins/minikube-integration/17857-697646/.minikube/files/etc/ssl/certs/7030362.pem -> 7030362.pem in /etc/ssl/certs
	I1226 22:07:21.629568  730714 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/files/etc/ssl/certs/7030362.pem -> /etc/ssl/certs/7030362.pem
	I1226 22:07:21.629707  730714 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1226 22:07:21.641283  730714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/files/etc/ssl/certs/7030362.pem --> /etc/ssl/certs/7030362.pem (1708 bytes)
	I1226 22:07:21.670457  730714 start.go:303] post-start completed in 168.310587ms
	I1226 22:07:21.670845  730714 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-324559
	I1226 22:07:21.689032  730714 profile.go:148] Saving config to /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/config.json ...
	I1226 22:07:21.689422  730714 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1226 22:07:21.689474  730714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-324559
	I1226 22:07:21.711505  730714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33686 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/ingress-addon-legacy-324559/id_rsa Username:docker}
	I1226 22:07:21.807132  730714 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1226 22:07:21.813668  730714 start.go:128] duration metric: createHost completed in 9.562376059s
	I1226 22:07:21.813698  730714 start.go:83] releasing machines lock for "ingress-addon-legacy-324559", held for 9.562518489s
	I1226 22:07:21.813792  730714 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-324559
	I1226 22:07:21.831415  730714 ssh_runner.go:195] Run: cat /version.json
	I1226 22:07:21.831426  730714 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1226 22:07:21.831481  730714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-324559
	I1226 22:07:21.831486  730714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-324559
	I1226 22:07:21.852418  730714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33686 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/ingress-addon-legacy-324559/id_rsa Username:docker}
	I1226 22:07:21.862296  730714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33686 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/ingress-addon-legacy-324559/id_rsa Username:docker}
	I1226 22:07:21.949110  730714 ssh_runner.go:195] Run: systemctl --version
	I1226 22:07:22.088501  730714 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1226 22:07:22.237809  730714 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1226 22:07:22.243226  730714 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1226 22:07:22.266246  730714 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1226 22:07:22.266333  730714 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1226 22:07:22.309056  730714 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1226 22:07:22.309084  730714 start.go:475] detecting cgroup driver to use...
	I1226 22:07:22.309117  730714 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1226 22:07:22.309170  730714 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1226 22:07:22.328613  730714 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1226 22:07:22.342786  730714 docker.go:203] disabling cri-docker service (if available) ...
	I1226 22:07:22.342852  730714 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1226 22:07:22.358160  730714 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1226 22:07:22.374425  730714 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1226 22:07:22.485238  730714 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1226 22:07:22.595842  730714 docker.go:219] disabling docker service ...
	I1226 22:07:22.595938  730714 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1226 22:07:22.618459  730714 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1226 22:07:22.631801  730714 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1226 22:07:22.730761  730714 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1226 22:07:22.835661  730714 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1226 22:07:22.850646  730714 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1226 22:07:22.872192  730714 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1226 22:07:22.872286  730714 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1226 22:07:22.888421  730714 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1226 22:07:22.888593  730714 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1226 22:07:22.901901  730714 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1226 22:07:22.914541  730714 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1226 22:07:22.927634  730714 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1226 22:07:22.939613  730714 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1226 22:07:22.950805  730714 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1226 22:07:22.960983  730714 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1226 22:07:23.051160  730714 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1226 22:07:23.176058  730714 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1226 22:07:23.176156  730714 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1226 22:07:23.180841  730714 start.go:543] Will wait 60s for crictl version
	I1226 22:07:23.180924  730714 ssh_runner.go:195] Run: which crictl
	I1226 22:07:23.185416  730714 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1226 22:07:23.228100  730714 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1226 22:07:23.228241  730714 ssh_runner.go:195] Run: crio --version
	I1226 22:07:23.272314  730714 ssh_runner.go:195] Run: crio --version
	I1226 22:07:23.321818  730714 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I1226 22:07:23.323678  730714 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-324559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1226 22:07:23.341216  730714 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1226 22:07:23.345878  730714 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1226 22:07:23.359804  730714 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1226 22:07:23.359875  730714 ssh_runner.go:195] Run: sudo crictl images --output json
	I1226 22:07:23.412973  730714 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1226 22:07:23.413047  730714 ssh_runner.go:195] Run: which lz4
	I1226 22:07:23.417471  730714 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 -> /preloaded.tar.lz4
	I1226 22:07:23.417570  730714 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1226 22:07:23.421705  730714 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1226 22:07:23.421740  730714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (489766197 bytes)
	I1226 22:07:25.598401  730714 crio.go:444] Took 2.180869 seconds to copy over tarball
	I1226 22:07:25.598478  730714 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1226 22:07:28.274242  730714 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.675734587s)
	I1226 22:07:28.274266  730714 crio.go:451] Took 2.675841 seconds to extract the tarball
	I1226 22:07:28.274276  730714 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1226 22:07:28.359451  730714 ssh_runner.go:195] Run: sudo crictl images --output json
	I1226 22:07:28.399196  730714 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1226 22:07:28.399226  730714 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1226 22:07:28.399276  730714 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1226 22:07:28.399503  730714 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1226 22:07:28.399581  730714 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1226 22:07:28.399662  730714 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1226 22:07:28.399734  730714 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1226 22:07:28.399791  730714 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1226 22:07:28.399845  730714 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1226 22:07:28.399906  730714 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1226 22:07:28.400837  730714 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1226 22:07:28.401241  730714 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1226 22:07:28.401394  730714 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1226 22:07:28.401527  730714 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1226 22:07:28.401662  730714 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1226 22:07:28.401782  730714 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1226 22:07:28.401911  730714 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1226 22:07:28.402035  730714 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	W1226 22:07:28.758142  730714 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1226 22:07:28.758398  730714 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	W1226 22:07:28.763029  730714 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1226 22:07:28.763261  730714 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	W1226 22:07:28.779656  730714 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	W1226 22:07:28.779751  730714 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I1226 22:07:28.779856  730714 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1226 22:07:28.779981  730714 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1226 22:07:28.786689  730714 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	W1226 22:07:28.793487  730714 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1226 22:07:28.793742  730714 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	W1226 22:07:28.804435  730714 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1226 22:07:28.804801  730714 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I1226 22:07:28.899832  730714 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I1226 22:07:28.899892  730714 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1226 22:07:28.899947  730714 ssh_runner.go:195] Run: which crictl
	I1226 22:07:28.909229  730714 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I1226 22:07:28.909270  730714 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1226 22:07:28.909321  730714 ssh_runner.go:195] Run: which crictl
	W1226 22:07:28.933249  730714 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1226 22:07:28.933414  730714 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1226 22:07:28.959519  730714 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I1226 22:07:28.959562  730714 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1226 22:07:28.959618  730714 ssh_runner.go:195] Run: which crictl
	I1226 22:07:28.966878  730714 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I1226 22:07:28.966915  730714 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1226 22:07:28.966967  730714 ssh_runner.go:195] Run: which crictl
	I1226 22:07:28.967044  730714 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I1226 22:07:28.967067  730714 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1226 22:07:28.967092  730714 ssh_runner.go:195] Run: which crictl
	I1226 22:07:29.004950  730714 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I1226 22:07:29.004999  730714 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1226 22:07:29.005052  730714 ssh_runner.go:195] Run: which crictl
	I1226 22:07:29.015013  730714 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I1226 22:07:29.015059  730714 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1226 22:07:29.015141  730714 ssh_runner.go:195] Run: which crictl
	I1226 22:07:29.015220  730714 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1226 22:07:29.015280  730714 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1226 22:07:29.139634  730714 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1226 22:07:29.139682  730714 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1226 22:07:29.139732  730714 ssh_runner.go:195] Run: which crictl
	I1226 22:07:29.139832  730714 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I1226 22:07:29.139863  730714 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1226 22:07:29.139917  730714 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1226 22:07:29.140010  730714 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1226 22:07:29.140054  730714 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I1226 22:07:29.140102  730714 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1226 22:07:29.140151  730714 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1226 22:07:29.288542  730714 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I1226 22:07:29.288599  730714 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I1226 22:07:29.288643  730714 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I1226 22:07:29.288687  730714 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I1226 22:07:29.288720  730714 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1226 22:07:29.288731  730714 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I1226 22:07:29.362144  730714 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1226 22:07:29.362258  730714 cache_images.go:92] LoadImages completed in 963.016699ms
	W1226 22:07:29.362330  730714 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20: no such file or directory
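
The block above follows one pattern per image: inspect it, notice the amd64-for-arm64 architecture mismatch, remove the wrong-arch copy from the runtime with crictl, then attempt to load the cached arm64 tarball (which fails here because the cache file was never created). The check-and-remove half as plain commands, for one image from this run:

    IMG=registry.k8s.io/kube-scheduler:v1.18.20
    # is the image already known to the CRI runtime?
    sudo crictl images --output json | grep -q "$IMG" || echo "$IMG not present"
    # remove a wrong-architecture copy so it can be re-pulled or re-loaded
    sudo crictl rmi "$IMG"
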
	I1226 22:07:29.362404  730714 ssh_runner.go:195] Run: crio config
	I1226 22:07:29.424895  730714 cni.go:84] Creating CNI manager for ""
	I1226 22:07:29.424919  730714 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1226 22:07:29.424973  730714 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1226 22:07:29.425000  730714 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-324559 NodeName:ingress-addon-legacy-324559 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1226 22:07:29.425192  730714 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-324559"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
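
A config like the one above can be exercised before it touches a live node: kubeadm's dry-run mode prints what init would do without persisting anything (a sketch, assuming the --dry-run flag is available in this kubeadm release):

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
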
	
	I1226 22:07:29.425284  730714 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-324559 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-324559 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
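
Installing a drop-in like the one above is the standard systemd flow: place the file, reload unit definitions, restart the service. A sketch with the paths from this run (the local 10-kubeadm.conf stands in for the rendered drop-in):

    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    sudo cp 10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    sudo systemctl daemon-reload && sudo systemctl restart kubelet
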
	I1226 22:07:29.425391  730714 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1226 22:07:29.436322  730714 binaries.go:44] Found k8s binaries, skipping transfer
	I1226 22:07:29.436435  730714 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1226 22:07:29.447354  730714 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I1226 22:07:29.468756  730714 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1226 22:07:29.490440  730714 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1226 22:07:29.511947  730714 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1226 22:07:29.516631  730714 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1226 22:07:29.530156  730714 certs.go:56] Setting up /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559 for IP: 192.168.49.2
	I1226 22:07:29.530189  730714 certs.go:190] acquiring lock for shared ca certs: {Name:mke6488a150c186a525017f74b8a69a9f5240d76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 22:07:29.530384  730714 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17857-697646/.minikube/ca.key
	I1226 22:07:29.530430  730714 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17857-697646/.minikube/proxy-client-ca.key
	I1226 22:07:29.530487  730714 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/client.key
	I1226 22:07:29.530501  730714 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/client.crt with IP's: []
	I1226 22:07:29.972524  730714 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/client.crt ...
	I1226 22:07:29.972555  730714 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/client.crt: {Name:mk8a22fe6e0abd719a82f98f1fe6479d73ab1657 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 22:07:29.972755  730714 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/client.key ...
	I1226 22:07:29.972769  730714 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/client.key: {Name:mk9d727a6f66199779723590026fbcb60bde4dcc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 22:07:29.972858  730714 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/apiserver.key.dd3b5fb2
	I1226 22:07:29.972876  730714 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1226 22:07:30.419952  730714 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/apiserver.crt.dd3b5fb2 ...
	I1226 22:07:30.419983  730714 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/apiserver.crt.dd3b5fb2: {Name:mk00e9e189795c3e50287394df7a3f2d3d3de7ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 22:07:30.420176  730714 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/apiserver.key.dd3b5fb2 ...
	I1226 22:07:30.420190  730714 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/apiserver.key.dd3b5fb2: {Name:mk770b7073fc81eb854c1f0707b6a612caa7058b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 22:07:30.420274  730714 certs.go:337] copying /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/apiserver.crt
	I1226 22:07:30.420350  730714 certs.go:341] copying /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/apiserver.key
	I1226 22:07:30.420410  730714 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/proxy-client.key
	I1226 22:07:30.420426  730714 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/proxy-client.crt with IP's: []
	I1226 22:07:30.598927  730714 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/proxy-client.crt ...
	I1226 22:07:30.598959  730714 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/proxy-client.crt: {Name:mkb3c8029447d34cff0a9e60b1b875fff68b3905 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 22:07:30.599145  730714 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/proxy-client.key ...
	I1226 22:07:30.599158  730714 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/proxy-client.key: {Name:mk3a636739633a15143f255b034abd0774f605b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 22:07:30.599242  730714 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1226 22:07:30.599262  730714 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1226 22:07:30.599274  730714 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1226 22:07:30.599290  730714 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1226 22:07:30.599301  730714 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1226 22:07:30.599316  730714 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1226 22:07:30.599327  730714 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1226 22:07:30.599341  730714 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1226 22:07:30.599409  730714 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/home/jenkins/minikube-integration/17857-697646/.minikube/certs/703036.pem (1338 bytes)
	W1226 22:07:30.599444  730714 certs.go:433] ignoring /home/jenkins/minikube-integration/17857-697646/.minikube/certs/home/jenkins/minikube-integration/17857-697646/.minikube/certs/703036_empty.pem, impossibly tiny 0 bytes
	I1226 22:07:30.599458  730714 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca-key.pem (1675 bytes)
	I1226 22:07:30.599491  730714 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem (1082 bytes)
	I1226 22:07:30.599518  730714 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/home/jenkins/minikube-integration/17857-697646/.minikube/certs/cert.pem (1123 bytes)
	I1226 22:07:30.599550  730714 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/home/jenkins/minikube-integration/17857-697646/.minikube/certs/key.pem (1679 bytes)
	I1226 22:07:30.599598  730714 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-697646/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17857-697646/.minikube/files/etc/ssl/certs/7030362.pem (1708 bytes)
	I1226 22:07:30.599631  730714 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/703036.pem -> /usr/share/ca-certificates/703036.pem
	I1226 22:07:30.599652  730714 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/files/etc/ssl/certs/7030362.pem -> /usr/share/ca-certificates/7030362.pem
	I1226 22:07:30.599667  730714 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1226 22:07:30.600269  730714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1226 22:07:30.629179  730714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1226 22:07:30.658193  730714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1226 22:07:30.686902  730714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1226 22:07:30.715874  730714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1226 22:07:30.745033  730714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1226 22:07:30.773521  730714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1226 22:07:30.802255  730714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1226 22:07:30.831162  730714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/certs/703036.pem --> /usr/share/ca-certificates/703036.pem (1338 bytes)
	I1226 22:07:30.860045  730714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/files/etc/ssl/certs/7030362.pem --> /usr/share/ca-certificates/7030362.pem (1708 bytes)
	I1226 22:07:30.888772  730714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1226 22:07:30.918212  730714 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1226 22:07:30.940057  730714 ssh_runner.go:195] Run: openssl version
	I1226 22:07:30.947558  730714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/703036.pem && ln -fs /usr/share/ca-certificates/703036.pem /etc/ssl/certs/703036.pem"
	I1226 22:07:30.959317  730714 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/703036.pem
	I1226 22:07:30.964090  730714 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 26 21:58 /usr/share/ca-certificates/703036.pem
	I1226 22:07:30.964198  730714 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/703036.pem
	I1226 22:07:30.972892  730714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/703036.pem /etc/ssl/certs/51391683.0"
	I1226 22:07:30.984569  730714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7030362.pem && ln -fs /usr/share/ca-certificates/7030362.pem /etc/ssl/certs/7030362.pem"
	I1226 22:07:30.996374  730714 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7030362.pem
	I1226 22:07:31.002047  730714 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 26 21:58 /usr/share/ca-certificates/7030362.pem
	I1226 22:07:31.002130  730714 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7030362.pem
	I1226 22:07:31.013541  730714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7030362.pem /etc/ssl/certs/3ec20f2e.0"
	I1226 22:07:31.026013  730714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1226 22:07:31.038083  730714 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1226 22:07:31.042849  730714 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 26 21:45 /usr/share/ca-certificates/minikubeCA.pem
	I1226 22:07:31.042947  730714 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1226 22:07:31.051799  730714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
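
The ln -fs steps above implement OpenSSL's subject-hash lookup convention: tools find a CA in /etc/ssl/certs through a symlink named <subject-hash>.0. Done by hand for the minikube CA, with the hash computed rather than hard-coded:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
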
	I1226 22:07:31.063890  730714 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1226 22:07:31.068613  730714 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1226 22:07:31.068670  730714 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-324559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-324559 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 22:07:31.068745  730714 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1226 22:07:31.068810  730714 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1226 22:07:31.112837  730714 cri.go:89] found id: ""
	I1226 22:07:31.112923  730714 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1226 22:07:31.124157  730714 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1226 22:07:31.135237  730714 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1226 22:07:31.135345  730714 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1226 22:07:31.146460  730714 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1226 22:07:31.146505  730714 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1226 22:07:31.201579  730714 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1226 22:07:31.201934  730714 kubeadm.go:322] [preflight] Running pre-flight checks
	I1226 22:07:31.254257  730714 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1226 22:07:31.254329  730714 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-aws
	I1226 22:07:31.254373  730714 kubeadm.go:322] OS: Linux
	I1226 22:07:31.254422  730714 kubeadm.go:322] CGROUPS_CPU: enabled
	I1226 22:07:31.254472  730714 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1226 22:07:31.254521  730714 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1226 22:07:31.254570  730714 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1226 22:07:31.254620  730714 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1226 22:07:31.254674  730714 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1226 22:07:31.348764  730714 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1226 22:07:31.348956  730714 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1226 22:07:31.349114  730714 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1226 22:07:31.590536  730714 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1226 22:07:31.592273  730714 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1226 22:07:31.592511  730714 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1226 22:07:31.701044  730714 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1226 22:07:31.705920  730714 out.go:204]   - Generating certificates and keys ...
	I1226 22:07:31.706030  730714 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1226 22:07:31.706122  730714 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1226 22:07:31.980294  730714 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1226 22:07:32.161749  730714 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1226 22:07:32.795328  730714 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1226 22:07:33.170219  730714 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1226 22:07:33.728883  730714 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1226 22:07:33.729478  730714 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-324559 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1226 22:07:34.153482  730714 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1226 22:07:34.153709  730714 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-324559 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1226 22:07:34.410856  730714 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1226 22:07:34.690243  730714 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1226 22:07:35.559562  730714 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1226 22:07:35.559885  730714 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1226 22:07:36.412816  730714 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1226 22:07:36.962283  730714 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1226 22:07:37.414406  730714 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1226 22:07:37.882580  730714 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1226 22:07:37.883346  730714 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1226 22:07:37.885831  730714 out.go:204]   - Booting up control plane ...
	I1226 22:07:37.885946  730714 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1226 22:07:37.892833  730714 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1226 22:07:37.894707  730714 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1226 22:07:37.902287  730714 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1226 22:07:37.909625  730714 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1226 22:07:50.412846  730714 kubeadm.go:322] [apiclient] All control plane components are healthy after 12.503047 seconds
	I1226 22:07:50.412961  730714 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1226 22:07:50.426955  730714 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1226 22:07:50.949310  730714 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1226 22:07:50.949471  730714 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-324559 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1226 22:07:51.458499  730714 kubeadm.go:322] [bootstrap-token] Using token: yb40p9.8404dfnyapvgue80
	I1226 22:07:51.460820  730714 out.go:204]   - Configuring RBAC rules ...
	I1226 22:07:51.460941  730714 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1226 22:07:51.466516  730714 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1226 22:07:51.474505  730714 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1226 22:07:51.477881  730714 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1226 22:07:51.481201  730714 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1226 22:07:51.484959  730714 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1226 22:07:51.496930  730714 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1226 22:07:51.840700  730714 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1226 22:07:51.936915  730714 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1226 22:07:51.941469  730714 kubeadm.go:322] 
	I1226 22:07:51.941562  730714 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1226 22:07:51.941582  730714 kubeadm.go:322] 
	I1226 22:07:51.941655  730714 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1226 22:07:51.941663  730714 kubeadm.go:322] 
	I1226 22:07:51.941691  730714 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1226 22:07:51.941749  730714 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1226 22:07:51.941807  730714 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1226 22:07:51.941816  730714 kubeadm.go:322] 
	I1226 22:07:51.941866  730714 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1226 22:07:51.941948  730714 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1226 22:07:51.942019  730714 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1226 22:07:51.942030  730714 kubeadm.go:322] 
	I1226 22:07:51.942119  730714 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1226 22:07:51.942194  730714 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1226 22:07:51.942201  730714 kubeadm.go:322] 
	I1226 22:07:51.942311  730714 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token yb40p9.8404dfnyapvgue80 \
	I1226 22:07:51.942422  730714 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:eb99bb3bfadffb0fd08fb657f91b723758be2a0ceabacd68fe612edc25108351 \
	I1226 22:07:51.942446  730714 kubeadm.go:322]     --control-plane 
	I1226 22:07:51.942452  730714 kubeadm.go:322] 
	I1226 22:07:51.942531  730714 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1226 22:07:51.942545  730714 kubeadm.go:322] 
	I1226 22:07:51.942623  730714 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token yb40p9.8404dfnyapvgue80 \
	I1226 22:07:51.942723  730714 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:eb99bb3bfadffb0fd08fb657f91b723758be2a0ceabacd68fe612edc25108351 
	I1226 22:07:51.942887  730714 kubeadm.go:322] W1226 22:07:31.200684    1226 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1226 22:07:51.943119  730714 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I1226 22:07:51.943263  730714 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1226 22:07:51.943401  730714 kubeadm.go:322] W1226 22:07:37.892502    1226 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1226 22:07:51.943529  730714 kubeadm.go:322] W1226 22:07:37.894877    1226 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
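
If the --discovery-token-ca-cert-hash printed in the join commands above is ever lost, it can be recomputed from the cluster CA. This is the standard recipe from the Kubernetes documentation, pointed at minikube's certificate directory:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
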
	I1226 22:07:51.943536  730714 cni.go:84] Creating CNI manager for ""
	I1226 22:07:51.943544  730714 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1226 22:07:51.945631  730714 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1226 22:07:51.947621  730714 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1226 22:07:51.952994  730714 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I1226 22:07:51.953021  730714 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1226 22:07:51.983255  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1226 22:07:52.441372  730714 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1226 22:07:52.441466  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:07:52.441489  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=393f165ced08f66e4386491f243850f87982a22b minikube.k8s.io/name=ingress-addon-legacy-324559 minikube.k8s.io/updated_at=2023_12_26T22_07_52_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:07:52.462949  730714 ops.go:34] apiserver oom_adj: -16
	I1226 22:07:52.567458  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:07:53.067764  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:07:53.568299  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:07:54.067689  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:07:54.568116  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:07:55.068577  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:07:55.568346  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:07:56.067674  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:07:56.567557  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:07:57.067607  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:07:57.568469  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:07:58.068402  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:07:58.567679  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:07:59.068180  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:07:59.568323  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:08:00.068649  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:08:00.567634  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:08:01.068270  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:08:01.568062  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:08:02.067967  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:08:02.567788  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:08:03.067915  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:08:03.568493  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:08:04.068568  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:08:04.567595  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:08:05.068312  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:08:05.567741  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:08:06.067579  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:08:06.567563  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:08:07.068314  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:08:07.171114  730714 kubeadm.go:1088] duration metric: took 14.729731541s to wait for elevateKubeSystemPrivileges.
	I1226 22:08:07.171170  730714 kubeadm.go:406] StartCluster complete in 36.102505887s
	I1226 22:08:07.171187  730714 settings.go:142] acquiring lock: {Name:mk1b89d623875ac96830001bdd0fc2b8d8c10aec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 22:08:07.171248  730714 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17857-697646/kubeconfig
	I1226 22:08:07.171995  730714 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-697646/kubeconfig: {Name:mk171fc32e21f516abb68bc5ebeb628b3c1d7f0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 22:08:07.172585  730714 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1226 22:08:07.172886  730714 config.go:182] Loaded profile config "ingress-addon-legacy-324559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1226 22:08:07.173036  730714 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I1226 22:08:07.173187  730714 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-324559"
	I1226 22:08:07.173210  730714 addons.go:237] Setting addon storage-provisioner=true in "ingress-addon-legacy-324559"
	I1226 22:08:07.173247  730714 host.go:66] Checking if "ingress-addon-legacy-324559" exists ...
	I1226 22:08:07.173152  730714 kapi.go:59] client config for ingress-addon-legacy-324559: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/client.crt", KeyFile:"/home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/client.key", CAFile:"/home/jenkins/minikube-integration/17857-697646/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bfa00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1226 22:08:07.173743  730714 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-324559 --format={{.State.Status}}
	I1226 22:08:07.174270  730714 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-324559"
	I1226 22:08:07.174297  730714 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-324559"
	I1226 22:08:07.174587  730714 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-324559 --format={{.State.Status}}
	I1226 22:08:07.174855  730714 cert_rotation.go:137] Starting client certificate rotation controller
	I1226 22:08:07.228315  730714 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1226 22:08:07.226991  730714 kapi.go:59] client config for ingress-addon-legacy-324559: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/client.crt", KeyFile:"/home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/client.key", CAFile:"/home/jenkins/minikube-integration/17857-697646/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bfa00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1226 22:08:07.230861  730714 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1226 22:08:07.230880  730714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1226 22:08:07.230945  730714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-324559
	I1226 22:08:07.230973  730714 addons.go:237] Setting addon default-storageclass=true in "ingress-addon-legacy-324559"
	I1226 22:08:07.231005  730714 host.go:66] Checking if "ingress-addon-legacy-324559" exists ...
	I1226 22:08:07.231494  730714 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-324559 --format={{.State.Status}}
	I1226 22:08:07.264296  730714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33686 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/ingress-addon-legacy-324559/id_rsa Username:docker}
	I1226 22:08:07.281609  730714 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I1226 22:08:07.281631  730714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1226 22:08:07.281693  730714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-324559
	I1226 22:08:07.305369  730714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33686 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/ingress-addon-legacy-324559/id_rsa Username:docker}
	I1226 22:08:07.494641  730714 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1226 22:08:07.524329  730714 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1226 22:08:07.569614  730714 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1226 22:08:07.693712  730714 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-324559" context rescaled to 1 replicas
	I1226 22:08:07.693755  730714 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1226 22:08:07.695994  730714 out.go:177] * Verifying Kubernetes components...
	I1226 22:08:07.698790  730714 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1226 22:08:08.380254  730714 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1226 22:08:08.380889  730714 kapi.go:59] client config for ingress-addon-legacy-324559: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/client.crt", KeyFile:"/home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/client.key", CAFile:"/home/jenkins/minikube-integration/17857-697646/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bfa00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1226 22:08:08.381145  730714 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-324559" to be "Ready" ...
	I1226 22:08:08.464566  730714 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1226 22:08:08.466871  730714 addons.go:508] enable addons completed in 1.293843953s: enabled=[storage-provisioner default-storageclass]
	I1226 22:08:10.384868  730714 node_ready.go:58] node "ingress-addon-legacy-324559" has status "Ready":"False"
	I1226 22:08:12.384914  730714 node_ready.go:58] node "ingress-addon-legacy-324559" has status "Ready":"False"
	I1226 22:08:14.885051  730714 node_ready.go:58] node "ingress-addon-legacy-324559" has status "Ready":"False"
	I1226 22:08:15.384900  730714 node_ready.go:49] node "ingress-addon-legacy-324559" has status "Ready":"True"
	I1226 22:08:15.384926  730714 node_ready.go:38] duration metric: took 7.00376792s waiting for node "ingress-addon-legacy-324559" to be "Ready" ...
	I1226 22:08:15.384935  730714 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1226 22:08:15.394258  730714 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-lsmfr" in "kube-system" namespace to be "Ready" ...
	I1226 22:08:17.397665  730714 pod_ready.go:102] pod "coredns-66bff467f8-lsmfr" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-12-26 22:08:07 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1226 22:08:19.400834  730714 pod_ready.go:102] pod "coredns-66bff467f8-lsmfr" in "kube-system" namespace has status "Ready":"False"
	I1226 22:08:20.401095  730714 pod_ready.go:92] pod "coredns-66bff467f8-lsmfr" in "kube-system" namespace has status "Ready":"True"
	I1226 22:08:20.401122  730714 pod_ready.go:81] duration metric: took 5.006826566s waiting for pod "coredns-66bff467f8-lsmfr" in "kube-system" namespace to be "Ready" ...
	I1226 22:08:20.401134  730714 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-324559" in "kube-system" namespace to be "Ready" ...
	I1226 22:08:20.413464  730714 pod_ready.go:92] pod "etcd-ingress-addon-legacy-324559" in "kube-system" namespace has status "Ready":"True"
	I1226 22:08:20.413497  730714 pod_ready.go:81] duration metric: took 12.353579ms waiting for pod "etcd-ingress-addon-legacy-324559" in "kube-system" namespace to be "Ready" ...
	I1226 22:08:20.413520  730714 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-324559" in "kube-system" namespace to be "Ready" ...
	I1226 22:08:20.419526  730714 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-324559" in "kube-system" namespace has status "Ready":"True"
	I1226 22:08:20.419553  730714 pod_ready.go:81] duration metric: took 6.02474ms waiting for pod "kube-apiserver-ingress-addon-legacy-324559" in "kube-system" namespace to be "Ready" ...
	I1226 22:08:20.419565  730714 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-324559" in "kube-system" namespace to be "Ready" ...
	I1226 22:08:20.426152  730714 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-324559" in "kube-system" namespace has status "Ready":"True"
	I1226 22:08:20.426221  730714 pod_ready.go:81] duration metric: took 6.647314ms waiting for pod "kube-controller-manager-ingress-addon-legacy-324559" in "kube-system" namespace to be "Ready" ...
	I1226 22:08:20.426249  730714 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nv5jt" in "kube-system" namespace to be "Ready" ...
	I1226 22:08:20.442495  730714 pod_ready.go:92] pod "kube-proxy-nv5jt" in "kube-system" namespace has status "Ready":"True"
	I1226 22:08:20.442558  730714 pod_ready.go:81] duration metric: took 16.288192ms waiting for pod "kube-proxy-nv5jt" in "kube-system" namespace to be "Ready" ...
	I1226 22:08:20.442586  730714 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-324559" in "kube-system" namespace to be "Ready" ...
	I1226 22:08:20.596001  730714 request.go:629] Waited for 153.269152ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-324559
	I1226 22:08:20.795952  730714 request.go:629] Waited for 197.313162ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-324559
	I1226 22:08:20.798863  730714 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-324559" in "kube-system" namespace has status "Ready":"True"
	I1226 22:08:20.798892  730714 pod_ready.go:81] duration metric: took 356.282154ms waiting for pod "kube-scheduler-ingress-addon-legacy-324559" in "kube-system" namespace to be "Ready" ...
	I1226 22:08:20.798906  730714 pod_ready.go:38] duration metric: took 5.413959031s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1226 22:08:20.798920  730714 api_server.go:52] waiting for apiserver process to appear ...
	I1226 22:08:20.798983  730714 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1226 22:08:20.812456  730714 api_server.go:72] duration metric: took 13.118667363s to wait for apiserver process to appear ...
	I1226 22:08:20.812580  730714 api_server.go:88] waiting for apiserver healthz status ...
	I1226 22:08:20.812615  730714 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1226 22:08:20.821561  730714 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1226 22:08:20.822511  730714 api_server.go:141] control plane version: v1.18.20
	I1226 22:08:20.822539  730714 api_server.go:131] duration metric: took 9.935075ms to wait for apiserver health ...
	I1226 22:08:20.822548  730714 system_pods.go:43] waiting for kube-system pods to appear ...
	I1226 22:08:20.995947  730714 request.go:629] Waited for 173.311235ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1226 22:08:21.001858  730714 system_pods.go:59] 8 kube-system pods found
	I1226 22:08:21.001899  730714 system_pods.go:61] "coredns-66bff467f8-lsmfr" [57d86f7d-5932-4ab4-ab83-a9ffd33cbc12] Running
	I1226 22:08:21.001906  730714 system_pods.go:61] "etcd-ingress-addon-legacy-324559" [787ba3f5-4dcb-4c02-99cd-b635e2a60d83] Running
	I1226 22:08:21.001913  730714 system_pods.go:61] "kindnet-xp2bf" [53d917f0-8851-4f9a-95bd-ecf62017fc1d] Running
	I1226 22:08:21.001919  730714 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-324559" [4f6831ee-03a7-4a54-9b62-0a3af3624f26] Running
	I1226 22:08:21.001925  730714 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-324559" [ffb495d2-e0d1-4362-9ee1-b9809c41b8b0] Running
	I1226 22:08:21.001930  730714 system_pods.go:61] "kube-proxy-nv5jt" [98081056-1e5f-4ad8-bb67-7da69b2e48c3] Running
	I1226 22:08:21.001936  730714 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-324559" [b12f331f-5253-4e48-bf19-0a649cc5a6a7] Running
	I1226 22:08:21.001945  730714 system_pods.go:61] "storage-provisioner" [e04e3e5c-a9ca-4733-b61d-aa5e4f84a94c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1226 22:08:21.001954  730714 system_pods.go:74] duration metric: took 179.379412ms to wait for pod list to return data ...
	I1226 22:08:21.001965  730714 default_sa.go:34] waiting for default service account to be created ...
	I1226 22:08:21.196372  730714 request.go:629] Waited for 194.29737ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1226 22:08:21.200821  730714 default_sa.go:45] found service account: "default"
	I1226 22:08:21.200851  730714 default_sa.go:55] duration metric: took 198.880182ms for default service account to be created ...
	I1226 22:08:21.200861  730714 system_pods.go:116] waiting for k8s-apps to be running ...
	I1226 22:08:21.396245  730714 request.go:629] Waited for 195.32344ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1226 22:08:21.402047  730714 system_pods.go:86] 8 kube-system pods found
	I1226 22:08:21.402081  730714 system_pods.go:89] "coredns-66bff467f8-lsmfr" [57d86f7d-5932-4ab4-ab83-a9ffd33cbc12] Running
	I1226 22:08:21.402089  730714 system_pods.go:89] "etcd-ingress-addon-legacy-324559" [787ba3f5-4dcb-4c02-99cd-b635e2a60d83] Running
	I1226 22:08:21.402094  730714 system_pods.go:89] "kindnet-xp2bf" [53d917f0-8851-4f9a-95bd-ecf62017fc1d] Running
	I1226 22:08:21.402129  730714 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-324559" [4f6831ee-03a7-4a54-9b62-0a3af3624f26] Running
	I1226 22:08:21.402140  730714 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-324559" [ffb495d2-e0d1-4362-9ee1-b9809c41b8b0] Running
	I1226 22:08:21.402146  730714 system_pods.go:89] "kube-proxy-nv5jt" [98081056-1e5f-4ad8-bb67-7da69b2e48c3] Running
	I1226 22:08:21.402155  730714 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-324559" [b12f331f-5253-4e48-bf19-0a649cc5a6a7] Running
	I1226 22:08:21.402162  730714 system_pods.go:89] "storage-provisioner" [e04e3e5c-a9ca-4733-b61d-aa5e4f84a94c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1226 22:08:21.402173  730714 system_pods.go:126] duration metric: took 201.305784ms to wait for k8s-apps to be running ...
	I1226 22:08:21.402181  730714 system_svc.go:44] waiting for kubelet service to be running ....
	I1226 22:08:21.402253  730714 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1226 22:08:21.420968  730714 system_svc.go:56] duration metric: took 18.776471ms WaitForService to wait for kubelet.
	I1226 22:08:21.420996  730714 kubeadm.go:581] duration metric: took 13.727213646s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1226 22:08:21.421017  730714 node_conditions.go:102] verifying NodePressure condition ...
	I1226 22:08:21.596404  730714 request.go:629] Waited for 175.303082ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1226 22:08:21.599442  730714 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1226 22:08:21.599474  730714 node_conditions.go:123] node cpu capacity is 2
	I1226 22:08:21.599486  730714 node_conditions.go:105] duration metric: took 178.463092ms to run NodePressure ...
	I1226 22:08:21.599517  730714 start.go:228] waiting for startup goroutines ...
	I1226 22:08:21.599529  730714 start.go:233] waiting for cluster config update ...
	I1226 22:08:21.599539  730714 start.go:242] writing updated cluster config ...
	I1226 22:08:21.599826  730714 ssh_runner.go:195] Run: rm -f paused
	I1226 22:08:21.666507  730714 start.go:600] kubectl: 1.29.0, cluster: 1.18.20 (minor skew: 11)
	I1226 22:08:21.669368  730714 out.go:177] 
	W1226 22:08:21.671674  730714 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.18.20.
	I1226 22:08:21.673612  730714 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1226 22:08:21.675473  730714 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-324559" cluster and "default" namespace by default
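
For readability: the CoreDNS rewrite the run executes at 22:08:07.569614 above is a single long pipeline. Broken out below with the same kubectl binary, kubeconfig, and sed expressions as the logged command (the KUBECTL variable is introduced here only to shorten the lines; nothing else is new):

    KUBECTL=/var/lib/minikube/binaries/v1.18.20/kubectl
    sudo $KUBECTL --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml \
      | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' \
            -e '/^        errors *$/i \        log' \
      | sudo $KUBECTL --kubeconfig=/var/lib/minikube/kubeconfig replace -f -

The first sed expression inserts a hosts block that resolves host.minikube.internal to 192.168.49.1 ahead of the forward plugin; the second inserts the log plugin before errors. The "host record injected into CoreDNS's ConfigMap" line at 22:08:08.380254 confirms the rewrite succeeded.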
	
	
	==> CRI-O <==
	Dec 26 22:12:34 ingress-addon-legacy-324559 crio[893]: time="2023-12-26 22:12:34.248592967Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=1429cfe0-e004-4213-8722-825a40479b2d name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 26 22:12:37 ingress-addon-legacy-324559 crio[893]: time="2023-12-26 22:12:37.248503580Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=df673811-5ac8-48aa-b13a-cf1dd2d6183b name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 26 22:12:37 ingress-addon-legacy-324559 crio[893]: time="2023-12-26 22:12:37.248811413Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=df673811-5ac8-48aa-b13a-cf1dd2d6183b name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 26 22:12:48 ingress-addon-legacy-324559 crio[893]: time="2023-12-26 22:12:48.248386623Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=a23b914e-c303-4b59-ad1b-82e2fbaba681 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 26 22:12:48 ingress-addon-legacy-324559 crio[893]: time="2023-12-26 22:12:48.248736924Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=a23b914e-c303-4b59-ad1b-82e2fbaba681 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 26 22:12:48 ingress-addon-legacy-324559 crio[893]: time="2023-12-26 22:12:48.249592018Z" level=info msg="Pulling image: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=5e8b0f27-0b25-4522-9530-dd75861e25de name=/runtime.v1alpha2.ImageService/PullImage
	Dec 26 22:12:48 ingress-addon-legacy-324559 crio[893]: time="2023-12-26 22:12:48.252234649Z" level=info msg="Trying to access \"docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Dec 26 22:12:50 ingress-addon-legacy-324559 crio[893]: time="2023-12-26 22:12:50.248315210Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=0e0485cc-8fa6-4287-a9a0-d0d434db9479 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 26 22:12:50 ingress-addon-legacy-324559 crio[893]: time="2023-12-26 22:12:50.248612474Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=0e0485cc-8fa6-4287-a9a0-d0d434db9479 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 26 22:12:55 ingress-addon-legacy-324559 crio[893]: time="2023-12-26 22:12:55.163737895Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=9aba98a8-dd87-4993-ba38-8d82e3a9dfc0 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 26 22:12:55 ingress-addon-legacy-324559 crio[893]: time="2023-12-26 22:12:55.163975240Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c,RepoTags:[k8s.gcr.io/pause:3.2 registry.k8s.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:31d3efd12022ffeffb3146bc10ae8beb890c80ed2f07363515580add7ed47636 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f registry.k8s.io/pause@sha256:31d3efd12022ffeffb3146bc10ae8beb890c80ed2f07363515580add7ed47636 registry.k8s.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:489397,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=9aba98a8-dd87-4993-ba38-8d82e3a9dfc0 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 26 22:13:05 ingress-addon-legacy-324559 crio[893]: time="2023-12-26 22:13:05.248981143Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=2dd73865-35b0-4304-b263-b4ae992affb5 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 26 22:13:05 ingress-addon-legacy-324559 crio[893]: time="2023-12-26 22:13:05.249253637Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=2dd73865-35b0-4304-b263-b4ae992affb5 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 26 22:13:19 ingress-addon-legacy-324559 crio[893]: time="2023-12-26 22:13:19.248535751Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=d64eeb06-ef3a-4331-a37f-66aa4f2d5ab1 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 26 22:13:19 ingress-addon-legacy-324559 crio[893]: time="2023-12-26 22:13:19.248810370Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=d64eeb06-ef3a-4331-a37f-66aa4f2d5ab1 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 26 22:13:32 ingress-addon-legacy-324559 crio[893]: time="2023-12-26 22:13:32.757528985Z" level=info msg="Pulling image: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=7882a022-0a2a-4b45-af2b-72969d0f9516 name=/runtime.v1alpha2.ImageService/PullImage
	Dec 26 22:13:32 ingress-addon-legacy-324559 crio[893]: time="2023-12-26 22:13:32.759878248Z" level=info msg="Trying to access \"docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Dec 26 22:13:48 ingress-addon-legacy-324559 crio[893]: time="2023-12-26 22:13:48.248356460Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=f31afef5-6be0-4a6f-9f2e-495edbe01cca name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 26 22:13:48 ingress-addon-legacy-324559 crio[893]: time="2023-12-26 22:13:48.248689400Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=f31afef5-6be0-4a6f-9f2e-495edbe01cca name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 26 22:14:02 ingress-addon-legacy-324559 crio[893]: time="2023-12-26 22:14:02.248492891Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=b3233597-aba0-4e56-855c-3a90d80f4379 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 26 22:14:02 ingress-addon-legacy-324559 crio[893]: time="2023-12-26 22:14:02.248784043Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=b3233597-aba0-4e56-855c-3a90d80f4379 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 26 22:14:15 ingress-addon-legacy-324559 crio[893]: time="2023-12-26 22:14:15.248690026Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=454e9363-6a6a-4aa9-861c-e036dadfac09 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 26 22:14:15 ingress-addon-legacy-324559 crio[893]: time="2023-12-26 22:14:15.248960502Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=454e9363-6a6a-4aa9-861c-e036dadfac09 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 26 22:14:15 ingress-addon-legacy-324559 crio[893]: time="2023-12-26 22:14:15.250254675Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=b95093c6-d42a-4cd2-ab23-fc03ba039c7f name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 26 22:14:15 ingress-addon-legacy-324559 crio[893]: time="2023-12-26 22:14:15.250519908Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=b95093c6-d42a-4cd2-ab23-fc03ba039c7f name=/runtime.v1alpha2.ImageService/ImageStatus
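
The check / "not found" / pull loop above is CRI-O retrying the admission-webhook image against Docker Hub; the kubelet log further down records the underlying "toomanyrequests" (anonymous pull rate limit) error for the same digest. A minimal sketch of reproducing the failure by hand from the node, assuming crictl is available inside the minikube node image and the profile name used throughout this run:

    minikube ssh -p ingress-addon-legacy-324559
    sudo crictl images | grep kube-webhook-certgen   # absent, matching the "not found" lines above
    sudo crictl pull docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7
    # expected to fail with "toomanyrequests" until the rate-limit window resets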
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0aea528753119       gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2   6 minutes ago       Running             storage-provisioner       0                   d4374744882ae       storage-provisioner
	02f33002d9479       6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184                                                  6 minutes ago       Running             coredns                   0                   930284210ecee       coredns-66bff467f8-lsmfr
	dce0f84a81950       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                6 minutes ago       Running             kindnet-cni               0                   1fe757aa81c2b       kindnet-xp2bf
	36c3a5e7fc3b0       565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8                                                  6 minutes ago       Running             kube-proxy                0                   f5370a56de2f8       kube-proxy-nv5jt
	beac61d1ace3e       095f37015706de6eedb4f57eb2f9a25a1e3bf4bec63d50ba73f8968ef4094fd1                                                  6 minutes ago       Running             kube-scheduler            0                   f51e7e7a68e9c       kube-scheduler-ingress-addon-legacy-324559
	46e6d02e3c574       68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1                                                  6 minutes ago       Running             kube-controller-manager   0                   89cbf22e5fbc6       kube-controller-manager-ingress-addon-legacy-324559
	575c4b5034ded       2694cf044d66591c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0                                                  6 minutes ago       Running             kube-apiserver            0                   c996e9805171b       kube-apiserver-ingress-addon-legacy-324559
	e9b1d6041f823       ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952                                                  6 minutes ago       Running             etcd                      0                   a8e498d029d1c       etcd-ingress-addon-legacy-324559
	
	
	==> coredns [02f33002d9479a64055fccd43ef1ca7ab676214fbd5ccf695f09d9e759813c6e] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 45700869df5177c7f3d9f7a279928a55
	CoreDNS-1.6.7
	linux/arm64, go1.13.6, da7f65b
	[INFO] 127.0.0.1:55624 - 34283 "HINFO IN 5942584428753798869.8126865998415205935. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.084125034s
	
	
	==> describe nodes <==
	Name:               ingress-addon-legacy-324559
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-324559
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393f165ced08f66e4386491f243850f87982a22b
	                    minikube.k8s.io/name=ingress-addon-legacy-324559
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_26T22_07_52_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 26 Dec 2023 22:07:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-324559
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 26 Dec 2023 22:14:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 26 Dec 2023 22:13:25 +0000   Tue, 26 Dec 2023 22:07:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 26 Dec 2023 22:13:25 +0000   Tue, 26 Dec 2023 22:07:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 26 Dec 2023 22:13:25 +0000   Tue, 26 Dec 2023 22:07:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 26 Dec 2023 22:13:25 +0000   Tue, 26 Dec 2023 22:08:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-324559
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 874b2f4d11c64fcb850c2458c0352d0d
	  System UUID:                ab7a2e48-7e2b-4a44-bb00-57f5bc9b375d
	  Boot ID:                    f8f887b2-8c20-433d-a967-90e814370f09
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  ingress-nginx               ingress-nginx-admission-create-b8xk7                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  ingress-nginx               ingress-nginx-admission-patch-h7nr5                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  ingress-nginx               ingress-nginx-controller-7fcf777cb7-hlm6t              100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         6m2s
	  kube-system                 coredns-66bff467f8-lsmfr                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m17s
	  kube-system                 etcd-ingress-addon-legacy-324559                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m29s
	  kube-system                 kindnet-xp2bf                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m18s
	  kube-system                 kube-apiserver-ingress-addon-legacy-324559             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m29s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-324559    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m29s
	  kube-system                 kube-proxy-nv5jt                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 kube-scheduler-ingress-addon-legacy-324559             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m29s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             210Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  6m43s (x5 over 6m44s)  kubelet     Node ingress-addon-legacy-324559 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m43s (x4 over 6m44s)  kubelet     Node ingress-addon-legacy-324559 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m43s (x4 over 6m44s)  kubelet     Node ingress-addon-legacy-324559 status is now: NodeHasSufficientPID
	  Normal  Starting                 6m29s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m29s                  kubelet     Node ingress-addon-legacy-324559 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m29s                  kubelet     Node ingress-addon-legacy-324559 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m29s                  kubelet     Node ingress-addon-legacy-324559 status is now: NodeHasSufficientPID
	  Normal  Starting                 6m16s                  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                6m9s                   kubelet     Node ingress-addon-legacy-324559 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.001236] FS-Cache: O-key=[8] '14613b0000000000'
	[  +0.000818] FS-Cache: N-cookie c=00000042 [p=00000039 fl=2 nc=0 na=1]
	[  +0.001056] FS-Cache: N-cookie d=00000000b9607d6a{9p.inode} n=00000000146f85f7
	[  +0.001167] FS-Cache: N-key=[8] '14613b0000000000'
	[  +0.003514] FS-Cache: Duplicate cookie detected
	[  +0.000807] FS-Cache: O-cookie c=0000003c [p=00000039 fl=226 nc=0 na=1]
	[  +0.001079] FS-Cache: O-cookie d=00000000b9607d6a{9p.inode} n=0000000079163410
	[  +0.001150] FS-Cache: O-key=[8] '14613b0000000000'
	[  +0.000783] FS-Cache: N-cookie c=00000043 [p=00000039 fl=2 nc=0 na=1]
	[  +0.001037] FS-Cache: N-cookie d=00000000b9607d6a{9p.inode} n=00000000f8dfdcd3
	[  +0.001195] FS-Cache: N-key=[8] '14613b0000000000'
	[  +2.993685] FS-Cache: Duplicate cookie detected
	[  +0.000876] FS-Cache: O-cookie c=0000003a [p=00000039 fl=226 nc=0 na=1]
	[  +0.001217] FS-Cache: O-cookie d=00000000b9607d6a{9p.inode} n=00000000ec3309d9
	[  +0.001222] FS-Cache: O-key=[8] '13613b0000000000'
	[  +0.000879] FS-Cache: N-cookie c=00000045 [p=00000039 fl=2 nc=0 na=1]
	[  +0.001151] FS-Cache: N-cookie d=00000000b9607d6a{9p.inode} n=00000000cf0f6968
	[  +0.001228] FS-Cache: N-key=[8] '13613b0000000000'
	[  +0.372532] FS-Cache: Duplicate cookie detected
	[  +0.000898] FS-Cache: O-cookie c=0000003f [p=00000039 fl=226 nc=0 na=1]
	[  +0.001163] FS-Cache: O-cookie d=00000000b9607d6a{9p.inode} n=0000000068094209
	[  +0.001226] FS-Cache: O-key=[8] '19613b0000000000'
	[  +0.000831] FS-Cache: N-cookie c=00000046 [p=00000039 fl=2 nc=0 na=1]
	[  +0.001168] FS-Cache: N-cookie d=00000000b9607d6a{9p.inode} n=0000000030afdcd3
	[  +0.001211] FS-Cache: N-key=[8] '19613b0000000000'
	
	
	==> etcd [e9b1d6041f823f638d3ff0bcb0d2fd195521e835aa1beea773b245095f9bb10a] <==
	raft2023/12/26 22:07:42 INFO: aec36adc501070cc became follower at term 0
	raft2023/12/26 22:07:42 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/12/26 22:07:42 INFO: aec36adc501070cc became follower at term 1
	raft2023/12/26 22:07:42 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-12-26 22:07:42.035422 W | auth: simple token is not cryptographically signed
	2023-12-26 22:07:42.040608 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-12-26 22:07:42.042967 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-12-26 22:07:42.043235 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-12-26 22:07:42.043464 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-12-26 22:07:42.044426 I | embed: listening for peers on 192.168.49.2:2380
	raft2023/12/26 22:07:42 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-12-26 22:07:42.045561 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	raft2023/12/26 22:07:42 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/12/26 22:07:42 INFO: aec36adc501070cc became candidate at term 2
	raft2023/12/26 22:07:42 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/12/26 22:07:42 INFO: aec36adc501070cc became leader at term 2
	raft2023/12/26 22:07:42 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-12-26 22:07:42.229233 I | etcdserver: published {Name:ingress-addon-legacy-324559 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-12-26 22:07:42.229508 I | etcdserver: setting up the initial cluster version to 3.4
	2023-12-26 22:07:42.230271 I | embed: ready to serve client requests
	2023-12-26 22:07:42.231052 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-12-26 22:07:42.231238 I | etcdserver/api: enabled capabilities for version 3.4
	2023-12-26 22:07:42.231315 I | embed: ready to serve client requests
	2023-12-26 22:07:42.234416 I | embed: serving client requests on 127.0.0.1:2379
	2023-12-26 22:07:42.287658 I | embed: serving client requests on 192.168.49.2:2379
	
	
	==> kernel <==
	 22:14:24 up  5:56,  0 users,  load average: 0.12, 0.49, 0.88
	Linux ingress-addon-legacy-324559 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [dce0f84a819509c98f957f9b06142244dd890242592ef8778d73ac98742e2356] <==
	I1226 22:12:20.176829       1 main.go:227] handling current node
	I1226 22:12:30.181130       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 22:12:30.181165       1 main.go:227] handling current node
	I1226 22:12:40.191205       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 22:12:40.191233       1 main.go:227] handling current node
	I1226 22:12:50.194663       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 22:12:50.194693       1 main.go:227] handling current node
	I1226 22:13:00.209544       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 22:13:00.209760       1 main.go:227] handling current node
	I1226 22:13:10.213260       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 22:13:10.213290       1 main.go:227] handling current node
	I1226 22:13:20.223503       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 22:13:20.223532       1 main.go:227] handling current node
	I1226 22:13:30.226681       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 22:13:30.226707       1 main.go:227] handling current node
	I1226 22:13:40.236098       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 22:13:40.236128       1 main.go:227] handling current node
	I1226 22:13:50.239650       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 22:13:50.239678       1 main.go:227] handling current node
	I1226 22:14:00.249029       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 22:14:00.249383       1 main.go:227] handling current node
	I1226 22:14:10.253023       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 22:14:10.253056       1 main.go:227] handling current node
	I1226 22:14:20.262996       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 22:14:20.263025       1 main.go:227] handling current node
	
	
	==> kube-apiserver [575c4b5034ded1ed2f54ae4bccbe637a9d78408e528f471d7105f50193c84be5] <==
	I1226 22:07:48.783238       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	I1226 22:07:48.783288       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	E1226 22:07:48.808814       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I1226 22:07:48.904272       1 cache.go:39] Caches are synced for autoregister controller
	I1226 22:07:48.904420       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1226 22:07:48.904488       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1226 22:07:48.904549       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1226 22:07:48.904602       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1226 22:07:49.691753       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1226 22:07:49.691780       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1226 22:07:49.697601       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1226 22:07:49.702169       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1226 22:07:49.702194       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1226 22:07:50.119029       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1226 22:07:50.174696       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1226 22:07:50.297300       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1226 22:07:50.298253       1 controller.go:609] quota admission added evaluator for: endpoints
	I1226 22:07:50.303645       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1226 22:07:51.162779       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1226 22:07:51.817954       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1226 22:07:51.905171       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1226 22:07:55.220193       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1226 22:08:06.541582       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1226 22:08:07.227594       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1226 22:08:22.862751       1 controller.go:609] quota admission added evaluator for: jobs.batch
	
	
	==> kube-controller-manager [46e6d02e3c5745545bfd24ad3504b526b69d1d83ce3073bf30c82b94071ba620] <==
	E1226 22:08:06.748324       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	I1226 22:08:06.765593       1 shared_informer.go:230] Caches are synced for attach detach 
	E1226 22:08:06.776295       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	I1226 22:08:06.959177       1 shared_informer.go:230] Caches are synced for bootstrap_signer 
	I1226 22:08:07.089344       1 shared_informer.go:230] Caches are synced for endpoint_slice 
	I1226 22:08:07.125121       1 shared_informer.go:230] Caches are synced for resource quota 
	I1226 22:08:07.162551       1 shared_informer.go:230] Caches are synced for disruption 
	I1226 22:08:07.162579       1 disruption.go:339] Sending events to api server.
	I1226 22:08:07.162891       1 shared_informer.go:230] Caches are synced for endpoint 
	I1226 22:08:07.189243       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1226 22:08:07.189267       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1226 22:08:07.210763       1 shared_informer.go:230] Caches are synced for resource quota 
	I1226 22:08:07.211854       1 shared_informer.go:230] Caches are synced for deployment 
	I1226 22:08:07.217267       1 shared_informer.go:230] Caches are synced for ReplicaSet 
	I1226 22:08:07.222356       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1226 22:08:07.263900       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"3d8984af-e9bb-4c8d-8948-0243aba9c518", APIVersion:"apps/v1", ResourceVersion:"200", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 2
	I1226 22:08:07.388063       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"6b755a6f-abd1-454c-bf87-f73bb4df476a", APIVersion:"apps/v1", ResourceVersion:"353", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-dzd9l
	I1226 22:08:07.429603       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"6b755a6f-abd1-454c-bf87-f73bb4df476a", APIVersion:"apps/v1", ResourceVersion:"353", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-lsmfr
	I1226 22:08:07.457881       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"3d8984af-e9bb-4c8d-8948-0243aba9c518", APIVersion:"apps/v1", ResourceVersion:"352", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I1226 22:08:07.993623       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"6b755a6f-abd1-454c-bf87-f73bb4df476a", APIVersion:"apps/v1", ResourceVersion:"363", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-dzd9l
	I1226 22:08:16.566447       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I1226 22:08:22.837420       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"957360b2-8fd2-4ffd-8700-fd88a36908c0", APIVersion:"apps/v1", ResourceVersion:"469", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1226 22:08:22.846464       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"01577136-9062-440e-88c5-4ad3b321118d", APIVersion:"apps/v1", ResourceVersion:"470", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-hlm6t
	I1226 22:08:22.889893       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"704b55dc-5d95-4810-a089-9c17048692f5", APIVersion:"batch/v1", ResourceVersion:"479", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-b8xk7
	I1226 22:08:22.921853       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"9c82089e-e2ea-4ae1-b979-7023f696cf94", APIVersion:"batch/v1", ResourceVersion:"486", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-h7nr5
	
	
	==> kube-proxy [36c3a5e7fc3b0b3e62d89ddc70be43b8929f62a2440886ceded856e6e6596020] <==
	W1226 22:08:08.545246       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1226 22:08:08.588827       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I1226 22:08:08.588928       1 server_others.go:186] Using iptables Proxier.
	I1226 22:08:08.589270       1 server.go:583] Version: v1.18.20
	I1226 22:08:08.590264       1 config.go:315] Starting service config controller
	I1226 22:08:08.590359       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1226 22:08:08.590505       1 config.go:133] Starting endpoints config controller
	I1226 22:08:08.590573       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1226 22:08:08.704609       1 shared_informer.go:230] Caches are synced for endpoints config 
	I1226 22:08:08.704726       1 shared_informer.go:230] Caches are synced for service config 
	
	
	==> kube-scheduler [beac61d1ace3e9fcddf8defb7ffd81bc410cf8d57adc9293474065e9908c9ed9] <==
	I1226 22:07:48.901897       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1226 22:07:48.904202       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1226 22:07:48.904371       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1226 22:07:48.907823       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1226 22:07:48.907956       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1226 22:07:48.914836       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1226 22:07:48.914889       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1226 22:07:48.914961       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1226 22:07:48.915023       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1226 22:07:48.915093       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1226 22:07:48.915210       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1226 22:07:48.915657       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1226 22:07:48.915730       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1226 22:07:48.915794       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1226 22:07:48.915863       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1226 22:07:48.915922       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1226 22:07:48.919002       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1226 22:07:49.730841       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1226 22:07:49.772305       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1226 22:07:49.835906       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1226 22:07:49.923621       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I1226 22:07:51.706906       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E1226 22:08:07.440166       1 factory.go:503] pod: kube-system/coredns-66bff467f8-dzd9l is already present in unschedulable queue
	E1226 22:08:07.728987       1 factory.go:503] pod: kube-system/coredns-66bff467f8-lsmfr is already present in unschedulable queue
	E1226 22:08:08.428739       1 factory.go:503] pod: kube-system/storage-provisioner is already present in the active queue
	
	
	==> kubelet <==
	Dec 26 22:12:25 ingress-addon-legacy-324559 kubelet[1610]: E1226 22:12:25.903019    1610 remote_image.go:113] PullImage "docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" from image service failed: rpc error: code = Unknown desc = reading manifest sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Dec 26 22:12:25 ingress-addon-legacy-324559 kubelet[1610]: E1226 22:12:25.903078    1610 kuberuntime_image.go:50] Pull image "docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" failed: rpc error: code = Unknown desc = reading manifest sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Dec 26 22:12:25 ingress-addon-legacy-324559 kubelet[1610]: E1226 22:12:25.903146    1610 kuberuntime_manager.go:818] container start failed: ErrImagePull: rpc error: code = Unknown desc = reading manifest sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Dec 26 22:12:25 ingress-addon-legacy-324559 kubelet[1610]: E1226 22:12:25.903180    1610 pod_workers.go:191] Error syncing pod 37921367-230b-4ba2-b651-69e165130c2f ("ingress-nginx-admission-patch-h7nr5_ingress-nginx(37921367-230b-4ba2-b651-69e165130c2f)"), skipping: failed to "StartContainer" for "patch" with ErrImagePull: "rpc error: code = Unknown desc = reading manifest sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Dec 26 22:12:32 ingress-addon-legacy-324559 kubelet[1610]: E1226 22:12:32.944642    1610 secret.go:195] Couldn't get secret ingress-nginx/ingress-nginx-admission: secret "ingress-nginx-admission" not found
	Dec 26 22:12:32 ingress-addon-legacy-324559 kubelet[1610]: E1226 22:12:32.944750    1610 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/d298fb87-574d-41c1-92d8-7e50b4c6c8b8-webhook-cert podName:d298fb87-574d-41c1-92d8-7e50b4c6c8b8 nodeName:}" failed. No retries permitted until 2023-12-26 22:14:34.944724772 +0000 UTC m=+403.219772662 (durationBeforeRetry 2m2s). Error: "MountVolume.SetUp failed for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d298fb87-574d-41c1-92d8-7e50b4c6c8b8-webhook-cert\") pod \"ingress-nginx-controller-7fcf777cb7-hlm6t\" (UID: \"d298fb87-574d-41c1-92d8-7e50b4c6c8b8\") : secret \"ingress-nginx-admission\" not found"
	Dec 26 22:12:34 ingress-addon-legacy-324559 kubelet[1610]: E1226 22:12:34.249061    1610 pod_workers.go:191] Error syncing pod a6cf9ebb-384e-437d-8920-aec4d8b9acd0 ("ingress-nginx-admission-create-b8xk7_ingress-nginx(a6cf9ebb-384e-437d-8920-aec4d8b9acd0)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Dec 26 22:12:37 ingress-addon-legacy-324559 kubelet[1610]: E1226 22:12:37.249864    1610 pod_workers.go:191] Error syncing pod 37921367-230b-4ba2-b651-69e165130c2f ("ingress-nginx-admission-patch-h7nr5_ingress-nginx(37921367-230b-4ba2-b651-69e165130c2f)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Dec 26 22:12:42 ingress-addon-legacy-324559 kubelet[1610]: E1226 22:12:42.248172    1610 kubelet.go:1703] Unable to attach or mount volumes for pod "ingress-nginx-controller-7fcf777cb7-hlm6t_ingress-nginx(d298fb87-574d-41c1-92d8-7e50b4c6c8b8)": unmounted volumes=[webhook-cert], unattached volumes=[ingress-nginx-token-vrjrs webhook-cert]: timed out waiting for the condition; skipping pod
	Dec 26 22:12:42 ingress-addon-legacy-324559 kubelet[1610]: E1226 22:12:42.248227    1610 pod_workers.go:191] Error syncing pod d298fb87-574d-41c1-92d8-7e50b4c6c8b8 ("ingress-nginx-controller-7fcf777cb7-hlm6t_ingress-nginx(d298fb87-574d-41c1-92d8-7e50b4c6c8b8)"), skipping: unmounted volumes=[webhook-cert], unattached volumes=[ingress-nginx-token-vrjrs webhook-cert]: timed out waiting for the condition
	Dec 26 22:12:50 ingress-addon-legacy-324559 kubelet[1610]: E1226 22:12:50.248843    1610 pod_workers.go:191] Error syncing pod 37921367-230b-4ba2-b651-69e165130c2f ("ingress-nginx-admission-patch-h7nr5_ingress-nginx(37921367-230b-4ba2-b651-69e165130c2f)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Dec 26 22:12:55 ingress-addon-legacy-324559 kubelet[1610]: E1226 22:12:55.300693    1610 container_manager_linux.go:512] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /docker/4094ae8e0876f71f500e3ad840801c838e42523ab5cdca760fd0587649ebf25d, memory: /docker/4094ae8e0876f71f500e3ad840801c838e42523ab5cdca760fd0587649ebf25d/system.slice/kubelet.service
	Dec 26 22:13:05 ingress-addon-legacy-324559 kubelet[1610]: E1226 22:13:05.249458    1610 pod_workers.go:191] Error syncing pod 37921367-230b-4ba2-b651-69e165130c2f ("ingress-nginx-admission-patch-h7nr5_ingress-nginx(37921367-230b-4ba2-b651-69e165130c2f)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Dec 26 22:13:32 ingress-addon-legacy-324559 kubelet[1610]: E1226 22:13:32.756639    1610 remote_image.go:113] PullImage "docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" from image service failed: rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:d402db4f47a0e1007e8feb5e57d93c44f6c98ebf489ca77bacb91f8eefd2419b in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Dec 26 22:13:32 ingress-addon-legacy-324559 kubelet[1610]: E1226 22:13:32.756695    1610 kuberuntime_image.go:50] Pull image "docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" failed: rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:d402db4f47a0e1007e8feb5e57d93c44f6c98ebf489ca77bacb91f8eefd2419b in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Dec 26 22:13:32 ingress-addon-legacy-324559 kubelet[1610]: E1226 22:13:32.756950    1610 kuberuntime_manager.go:818] container start failed: ErrImagePull: rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:d402db4f47a0e1007e8feb5e57d93c44f6c98ebf489ca77bacb91f8eefd2419b in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Dec 26 22:13:32 ingress-addon-legacy-324559 kubelet[1610]: E1226 22:13:32.756985    1610 pod_workers.go:191] Error syncing pod a6cf9ebb-384e-437d-8920-aec4d8b9acd0 ("ingress-nginx-admission-create-b8xk7_ingress-nginx(a6cf9ebb-384e-437d-8920-aec4d8b9acd0)"), skipping: failed to "StartContainer" for "create" with ErrImagePull: "rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:d402db4f47a0e1007e8feb5e57d93c44f6c98ebf489ca77bacb91f8eefd2419b in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Dec 26 22:13:48 ingress-addon-legacy-324559 kubelet[1610]: E1226 22:13:48.248957    1610 pod_workers.go:191] Error syncing pod a6cf9ebb-384e-437d-8920-aec4d8b9acd0 ("ingress-nginx-admission-create-b8xk7_ingress-nginx(a6cf9ebb-384e-437d-8920-aec4d8b9acd0)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Dec 26 22:14:02 ingress-addon-legacy-324559 kubelet[1610]: E1226 22:14:02.249177    1610 pod_workers.go:191] Error syncing pod a6cf9ebb-384e-437d-8920-aec4d8b9acd0 ("ingress-nginx-admission-create-b8xk7_ingress-nginx(a6cf9ebb-384e-437d-8920-aec4d8b9acd0)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Dec 26 22:14:03 ingress-addon-legacy-324559 kubelet[1610]: E1226 22:14:03.029790    1610 remote_image.go:113] PullImage "docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" from image service failed: rpc error: code = Unknown desc = reading manifest sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Dec 26 22:14:03 ingress-addon-legacy-324559 kubelet[1610]: E1226 22:14:03.029858    1610 kuberuntime_image.go:50] Pull image "docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" failed: rpc error: code = Unknown desc = reading manifest sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Dec 26 22:14:03 ingress-addon-legacy-324559 kubelet[1610]: E1226 22:14:03.029926    1610 kuberuntime_manager.go:818] container start failed: ErrImagePull: rpc error: code = Unknown desc = reading manifest sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Dec 26 22:14:03 ingress-addon-legacy-324559 kubelet[1610]: E1226 22:14:03.029961    1610 pod_workers.go:191] Error syncing pod 37921367-230b-4ba2-b651-69e165130c2f ("ingress-nginx-admission-patch-h7nr5_ingress-nginx(37921367-230b-4ba2-b651-69e165130c2f)"), skipping: failed to "StartContainer" for "patch" with ErrImagePull: "rpc error: code = Unknown desc = reading manifest sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Dec 26 22:14:15 ingress-addon-legacy-324559 kubelet[1610]: E1226 22:14:15.249460    1610 pod_workers.go:191] Error syncing pod a6cf9ebb-384e-437d-8920-aec4d8b9acd0 ("ingress-nginx-admission-create-b8xk7_ingress-nginx(a6cf9ebb-384e-437d-8920-aec4d8b9acd0)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Dec 26 22:14:15 ingress-addon-legacy-324559 kubelet[1610]: E1226 22:14:15.250726    1610 pod_workers.go:191] Error syncing pod 37921367-230b-4ba2-b651-69e165130c2f ("ingress-nginx-admission-patch-h7nr5_ingress-nginx(37921367-230b-4ba2-b651-69e165130c2f)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	
	
	==> storage-provisioner [0aea528753119bda3cba01c417a6b9f286728e611aeb989175d4f1a81b799666] <==
	I1226 22:08:22.716387       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1226 22:08:22.740244       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1226 22:08:22.740561       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1226 22:08:22.750546       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1226 22:08:22.750725       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-324559_6f1b9ef8-eaf0-4794-970e-5e7767c735b3!
	I1226 22:08:22.751293       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fb789ba4-9e0f-40c3-82a2-46b1717003f4", APIVersion:"v1", ResourceVersion:"456", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-324559_6f1b9ef8-eaf0-4794-970e-5e7767c735b3 became leader
	I1226 22:08:22.852250       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-324559_6f1b9ef8-eaf0-4794-970e-5e7767c735b3!
	

-- /stdout --
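The kubelet section above isolates the root cause: every pull of docker.io/jettech/kube-webhook-certgen fails with Docker Hub's toomanyrequests rate limit, so the admission create/patch jobs never run and the controller's webhook-cert secret is never generated. (The kube-scheduler "forbidden" errors earlier in the dump are transient startup noise from the RBAC informers; the subsequent "Caches are synced" line shows they recovered.) A minimal workaround sketch, assuming the runner has Docker Hub credentials and that the digest-pinned reference resolves to the cached image under the pods' imagePullPolicy: pull the image once and side-load it into the node, using the same `image load` subcommand that appears in the Audit table later in this report.

	# authenticate once to lift the anonymous pull quota, then cache the image on the host
	docker login
	docker pull docker.io/jettech/kube-webhook-certgen:v1.5.1
	# copy the cached image into the profile's node so kubelet never contacts the registry
	minikube -p ingress-addon-legacy-324559 image load docker.io/jettech/kube-webhook-certgen:v1.5.1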
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-324559 -n ingress-addon-legacy-324559
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-324559 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-b8xk7 ingress-nginx-admission-patch-h7nr5 ingress-nginx-controller-7fcf777cb7-hlm6t
helpers_test.go:274: ======> post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ingress-addon-legacy-324559 describe pod ingress-nginx-admission-create-b8xk7 ingress-nginx-admission-patch-h7nr5 ingress-nginx-controller-7fcf777cb7-hlm6t
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context ingress-addon-legacy-324559 describe pod ingress-nginx-admission-create-b8xk7 ingress-nginx-admission-patch-h7nr5 ingress-nginx-controller-7fcf777cb7-hlm6t: exit status 1 (96.895341ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-b8xk7" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-h7nr5" not found
	Error from server (NotFound): pods "ingress-nginx-controller-7fcf777cb7-hlm6t" not found

** /stderr **
helpers_test.go:279: kubectl --context ingress-addon-legacy-324559 describe pod ingress-nginx-admission-create-b8xk7 ingress-nginx-admission-patch-h7nr5 ingress-nginx-controller-7fcf777cb7-hlm6t: exit status 1
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (363.73s)
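If the runner's anonymous pull quota is suspected, Docker Hub reports it via RateLimit headers on manifest requests. A quick probe, sketched assuming curl and jq are available (the probe itself may count against the quota):

	# request an anonymous pull token for the repository, then read the rate-limit headers
	TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:jettech/kube-webhook-certgen:pull" | jq -r .token)
	curl -sI -H "Authorization: Bearer $TOKEN" \
		https://registry-1.docker.io/v2/jettech/kube-webhook-certgen/manifests/v1.5.1 | grep -i ratelimit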

TestIngressAddonLegacy/serial/ValidateIngressAddons (92.49s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:207: (dbg) Run:  kubectl --context ingress-addon-legacy-324559 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
E1226 22:14:35.008978  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/client.crt: no such file or directory
addons_test.go:207: (dbg) Non-zero exit: kubectl --context ingress-addon-legacy-324559 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: exit status 1 (1m30.077493532s)

** stderr ** 
	error: timed out waiting for the condition on pods/ingress-nginx-controller-7fcf777cb7-hlm6t

** /stderr **
addons_test.go:208: failed waiting for ingress-nginx-controller : exit status 1
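The timeout follows directly from the earlier kubelet errors: the controller pod mounts the ingress-nginx-admission secret into its webhook-cert volume, and that secret is produced by the very admission jobs that could not pull their image. The dependency chain can be confirmed with kubectl, sketched here with this report's context name:

	# the admission jobs should be Complete and the secret present; here neither holds
	kubectl --context ingress-addon-legacy-324559 -n ingress-nginx get jobs,pods
	kubectl --context ingress-addon-legacy-324559 -n ingress-nginx get secret ingress-nginx-admission
	kubectl --context ingress-addon-legacy-324559 -n ingress-nginx describe pod -l app.kubernetes.io/component=controller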
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-324559
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-324559:

-- stdout --
	[
	    {
	        "Id": "4094ae8e0876f71f500e3ad840801c838e42523ab5cdca760fd0587649ebf25d",
	        "Created": "2023-12-26T22:07:18.971854467Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 731166,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-26T22:07:19.302540911Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3bfff26a1ae256fcdf8f10a333efdefbe26edc5c1669e1cc5c973c016e44d3c4",
	        "ResolvConfPath": "/var/lib/docker/containers/4094ae8e0876f71f500e3ad840801c838e42523ab5cdca760fd0587649ebf25d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4094ae8e0876f71f500e3ad840801c838e42523ab5cdca760fd0587649ebf25d/hostname",
	        "HostsPath": "/var/lib/docker/containers/4094ae8e0876f71f500e3ad840801c838e42523ab5cdca760fd0587649ebf25d/hosts",
	        "LogPath": "/var/lib/docker/containers/4094ae8e0876f71f500e3ad840801c838e42523ab5cdca760fd0587649ebf25d/4094ae8e0876f71f500e3ad840801c838e42523ab5cdca760fd0587649ebf25d-json.log",
	        "Name": "/ingress-addon-legacy-324559",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-324559:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-324559",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/af62208332817967e38e21b2d62dcd3015730420e4acd6c9bdbc71d008674fa0-init/diff:/var/lib/docker/overlay2/45396a29879cab7c8a67d68e40c59b67c1c0ba964e9ed87a152af8cc5862c477/diff",
	                "MergedDir": "/var/lib/docker/overlay2/af62208332817967e38e21b2d62dcd3015730420e4acd6c9bdbc71d008674fa0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/af62208332817967e38e21b2d62dcd3015730420e4acd6c9bdbc71d008674fa0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/af62208332817967e38e21b2d62dcd3015730420e4acd6c9bdbc71d008674fa0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-324559",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-324559/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-324559",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-324559",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-324559",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a3b543683e4e244300480628472cac8cc83ed7830cea43ebc7aa6f93cc64c660",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33686"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33685"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33682"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33684"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33683"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a3b543683e4e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-324559": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "4094ae8e0876",
	                        "ingress-addon-legacy-324559"
	                    ],
	                    "NetworkID": "b153fb06ea0ee03a524a832f3d32eaf518f15e5ce2de2b14e3e5d6521310ae6c",
	                    "EndpointID": "338cdf8169de82c124a9f0c9772d55f109e6584bf02d17db2755b29d7f83567d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
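The inspect output shows a healthy node container; the detail worth reading is the port map, where minikube publishes the node's SSH, Docker, and API-server ports on loopback-only ephemeral host ports. Rather than scanning the JSON by eye, a single binding can be pulled out with the same Go-template syntax the harness already uses for --format (a sketch; the container name and port come from this report):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' ingress-addon-legacy-324559
	# prints 33683 on this run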
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-324559 -n ingress-addon-legacy-324559
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-324559 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-324559 logs -n 25: (1.399087464s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	
	==> Audit <==
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| image          | functional-262391 image ls                                             | functional-262391           | jenkins | v1.32.0 | 26 Dec 23 22:06 UTC | 26 Dec 23 22:06 UTC |
	| image          | functional-262391 image load                                           | functional-262391           | jenkins | v1.32.0 | 26 Dec 23 22:06 UTC | 26 Dec 23 22:06 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-262391 image ls                                             | functional-262391           | jenkins | v1.32.0 | 26 Dec 23 22:06 UTC | 26 Dec 23 22:06 UTC |
	| image          | functional-262391 image save --daemon                                  | functional-262391           | jenkins | v1.32.0 | 26 Dec 23 22:06 UTC | 26 Dec 23 22:06 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-262391               |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-262391 ssh sudo cat                                         | functional-262391           | jenkins | v1.32.0 | 26 Dec 23 22:06 UTC | 26 Dec 23 22:06 UTC |
	|                | /etc/test/nested/copy/703036/hosts                                     |                             |         |         |                     |                     |
	| ssh            | functional-262391 ssh sudo cat                                         | functional-262391           | jenkins | v1.32.0 | 26 Dec 23 22:06 UTC | 26 Dec 23 22:06 UTC |
	|                | /etc/ssl/certs/703036.pem                                              |                             |         |         |                     |                     |
	| ssh            | functional-262391 ssh sudo cat                                         | functional-262391           | jenkins | v1.32.0 | 26 Dec 23 22:06 UTC | 26 Dec 23 22:06 UTC |
	|                | /usr/share/ca-certificates/703036.pem                                  |                             |         |         |                     |                     |
	| ssh            | functional-262391 ssh sudo cat                                         | functional-262391           | jenkins | v1.32.0 | 26 Dec 23 22:06 UTC | 26 Dec 23 22:06 UTC |
	|                | /etc/ssl/certs/51391683.0                                              |                             |         |         |                     |                     |
	| ssh            | functional-262391 ssh sudo cat                                         | functional-262391           | jenkins | v1.32.0 | 26 Dec 23 22:06 UTC | 26 Dec 23 22:06 UTC |
	|                | /etc/ssl/certs/7030362.pem                                             |                             |         |         |                     |                     |
	| ssh            | functional-262391 ssh sudo cat                                         | functional-262391           | jenkins | v1.32.0 | 26 Dec 23 22:06 UTC | 26 Dec 23 22:06 UTC |
	|                | /usr/share/ca-certificates/7030362.pem                                 |                             |         |         |                     |                     |
	| ssh            | functional-262391 ssh sudo cat                                         | functional-262391           | jenkins | v1.32.0 | 26 Dec 23 22:06 UTC | 26 Dec 23 22:06 UTC |
	|                | /etc/ssl/certs/3ec20f2e.0                                              |                             |         |         |                     |                     |
	| image          | functional-262391                                                      | functional-262391           | jenkins | v1.32.0 | 26 Dec 23 22:06 UTC | 26 Dec 23 22:06 UTC |
	|                | image ls --format short                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-262391                                                      | functional-262391           | jenkins | v1.32.0 | 26 Dec 23 22:06 UTC | 26 Dec 23 22:06 UTC |
	|                | image ls --format yaml                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-262391 ssh pgrep                                            | functional-262391           | jenkins | v1.32.0 | 26 Dec 23 22:06 UTC |                     |
	|                | buildkitd                                                              |                             |         |         |                     |                     |
	| image          | functional-262391                                                      | functional-262391           | jenkins | v1.32.0 | 26 Dec 23 22:06 UTC | 26 Dec 23 22:06 UTC |
	|                | image ls --format json                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-262391 image build -t                                       | functional-262391           | jenkins | v1.32.0 | 26 Dec 23 22:06 UTC | 26 Dec 23 22:06 UTC |
	|                | localhost/my-image:functional-262391                                   |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                       |                             |         |         |                     |                     |
	| image          | functional-262391                                                      | functional-262391           | jenkins | v1.32.0 | 26 Dec 23 22:06 UTC | 26 Dec 23 22:06 UTC |
	|                | image ls --format table                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| update-context | functional-262391                                                      | functional-262391           | jenkins | v1.32.0 | 26 Dec 23 22:06 UTC | 26 Dec 23 22:06 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-262391                                                      | functional-262391           | jenkins | v1.32.0 | 26 Dec 23 22:06 UTC | 26 Dec 23 22:06 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-262391                                                      | functional-262391           | jenkins | v1.32.0 | 26 Dec 23 22:06 UTC | 26 Dec 23 22:06 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| image          | functional-262391 image ls                                             | functional-262391           | jenkins | v1.32.0 | 26 Dec 23 22:06 UTC | 26 Dec 23 22:06 UTC |
	| delete         | -p functional-262391                                                   | functional-262391           | jenkins | v1.32.0 | 26 Dec 23 22:06 UTC | 26 Dec 23 22:06 UTC |
	| start          | -p ingress-addon-legacy-324559                                         | ingress-addon-legacy-324559 | jenkins | v1.32.0 | 26 Dec 23 22:06 UTC | 26 Dec 23 22:08 UTC |
	|                | --kubernetes-version=v1.18.20                                          |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                                                   |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-324559                                            | ingress-addon-legacy-324559 | jenkins | v1.32.0 | 26 Dec 23 22:08 UTC |                     |
	|                | addons enable ingress                                                  |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-324559                                            | ingress-addon-legacy-324559 | jenkins | v1.32.0 | 26 Dec 23 22:14 UTC | 26 Dec 23 22:14 UTC |
	|                | addons enable ingress-dns                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/26 22:06:59
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1226 22:06:59.345794  730714 out.go:296] Setting OutFile to fd 1 ...
	I1226 22:06:59.345979  730714 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 22:06:59.345988  730714 out.go:309] Setting ErrFile to fd 2...
	I1226 22:06:59.345994  730714 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 22:06:59.346257  730714 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17857-697646/.minikube/bin
	I1226 22:06:59.346708  730714 out.go:303] Setting JSON to false
	I1226 22:06:59.347567  730714 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":20953,"bootTime":1703607466,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1226 22:06:59.347645  730714 start.go:138] virtualization:  
	I1226 22:06:59.350658  730714 out.go:177] * [ingress-addon-legacy-324559] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1226 22:06:59.353593  730714 out.go:177]   - MINIKUBE_LOCATION=17857
	I1226 22:06:59.356030  730714 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1226 22:06:59.353727  730714 notify.go:220] Checking for updates...
	I1226 22:06:59.360930  730714 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17857-697646/kubeconfig
	I1226 22:06:59.363563  730714 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17857-697646/.minikube
	I1226 22:06:59.366338  730714 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1226 22:06:59.369035  730714 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1226 22:06:59.371569  730714 driver.go:392] Setting default libvirt URI to qemu:///system
	I1226 22:06:59.395736  730714 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1226 22:06:59.395850  730714 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 22:06:59.483957  730714 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2023-12-26 22:06:59.473710817 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1226 22:06:59.484053  730714 docker.go:295] overlay module found
	I1226 22:06:59.486771  730714 out.go:177] * Using the docker driver based on user configuration
	I1226 22:06:59.488708  730714 start.go:298] selected driver: docker
	I1226 22:06:59.488744  730714 start.go:902] validating driver "docker" against <nil>
	I1226 22:06:59.488759  730714 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1226 22:06:59.489362  730714 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 22:06:59.555423  730714 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2023-12-26 22:06:59.546108653 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1226 22:06:59.555590  730714 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1226 22:06:59.555837  730714 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1226 22:06:59.558277  730714 out.go:177] * Using Docker driver with root privileges
	I1226 22:06:59.560324  730714 cni.go:84] Creating CNI manager for ""
	I1226 22:06:59.560348  730714 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1226 22:06:59.560361  730714 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1226 22:06:59.560375  730714 start_flags.go:323] config:
	{Name:ingress-addon-legacy-324559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-324559 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 22:06:59.562857  730714 out.go:177] * Starting control plane node ingress-addon-legacy-324559 in cluster ingress-addon-legacy-324559
	I1226 22:06:59.565149  730714 cache.go:121] Beginning downloading kic base image for docker with crio
	I1226 22:06:59.567241  730714 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I1226 22:06:59.569226  730714 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1226 22:06:59.569312  730714 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I1226 22:06:59.586379  730714 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon, skipping pull
	I1226 22:06:59.586405  730714 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in daemon, skipping load
	I1226 22:06:59.636789  730714 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I1226 22:06:59.636825  730714 cache.go:56] Caching tarball of preloaded images
	I1226 22:06:59.637009  730714 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1226 22:06:59.639485  730714 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1226 22:06:59.641684  730714 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1226 22:06:59.753039  730714 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4?checksum=md5:8ddd7f37d9a9977fe856222993d36c3d -> /home/jenkins/minikube-integration/17857-697646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I1226 22:07:11.060217  730714 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1226 22:07:11.060321  730714 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17857-697646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1226 22:07:12.250416  730714 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I1226 22:07:12.250810  730714 profile.go:148] Saving config to /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/config.json ...
	I1226 22:07:12.250843  730714 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/config.json: {Name:mk79c37621425bb429e102f6d976700ae00d3f69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 22:07:12.251036  730714 cache.go:194] Successfully downloaded all kic artifacts
	I1226 22:07:12.251100  730714 start.go:365] acquiring machines lock for ingress-addon-legacy-324559: {Name:mk486fccab415ae2bf346d53fa0d55b82bd64c36 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 22:07:12.251169  730714 start.go:369] acquired machines lock for "ingress-addon-legacy-324559" in 48.458µs
	I1226 22:07:12.251191  730714 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-324559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-324559 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1226 22:07:12.251265  730714 start.go:125] createHost starting for "" (driver="docker")
	I1226 22:07:12.253787  730714 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1226 22:07:12.254086  730714 start.go:159] libmachine.API.Create for "ingress-addon-legacy-324559" (driver="docker")
	I1226 22:07:12.254114  730714 client.go:168] LocalClient.Create starting
	I1226 22:07:12.254176  730714 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem
	I1226 22:07:12.254238  730714 main.go:141] libmachine: Decoding PEM data...
	I1226 22:07:12.254257  730714 main.go:141] libmachine: Parsing certificate...
	I1226 22:07:12.254306  730714 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17857-697646/.minikube/certs/cert.pem
	I1226 22:07:12.254329  730714 main.go:141] libmachine: Decoding PEM data...
	I1226 22:07:12.254344  730714 main.go:141] libmachine: Parsing certificate...
	I1226 22:07:12.254766  730714 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-324559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1226 22:07:12.272705  730714 cli_runner.go:211] docker network inspect ingress-addon-legacy-324559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1226 22:07:12.272790  730714 network_create.go:281] running [docker network inspect ingress-addon-legacy-324559] to gather additional debugging logs...
	I1226 22:07:12.272811  730714 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-324559
	W1226 22:07:12.290733  730714 cli_runner.go:211] docker network inspect ingress-addon-legacy-324559 returned with exit code 1
	I1226 22:07:12.290768  730714 network_create.go:284] error running [docker network inspect ingress-addon-legacy-324559]: docker network inspect ingress-addon-legacy-324559: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-324559 not found
	I1226 22:07:12.290785  730714 network_create.go:286] output of [docker network inspect ingress-addon-legacy-324559]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-324559 not found
	
	** /stderr **
	I1226 22:07:12.290881  730714 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1226 22:07:12.308151  730714 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40004e0240}
	I1226 22:07:12.308198  730714 network_create.go:124] attempt to create docker network ingress-addon-legacy-324559 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1226 22:07:12.308256  730714 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-324559 ingress-addon-legacy-324559
	I1226 22:07:12.384279  730714 network_create.go:108] docker network ingress-addon-legacy-324559 192.168.49.0/24 created
	I1226 22:07:12.384313  730714 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-324559" container
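The network-creation step above can be reproduced standalone; a minimal sketch of the same docker invocation (all values copied from the logged command), followed by an inspect to confirm the subnet and gateway:

  docker network create --driver=bridge \
    --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
    -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
    --label=created_by.minikube.sigs.k8s.io=true \
    --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-324559 \
    ingress-addon-legacy-324559
  docker network inspect ingress-addon-legacy-324559 \
    --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'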
	I1226 22:07:12.384413  730714 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1226 22:07:12.401520  730714 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-324559 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-324559 --label created_by.minikube.sigs.k8s.io=true
	I1226 22:07:12.420464  730714 oci.go:103] Successfully created a docker volume ingress-addon-legacy-324559
	I1226 22:07:12.420618  730714 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-324559-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-324559 --entrypoint /usr/bin/test -v ingress-addon-legacy-324559:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib
	I1226 22:07:13.921405  730714 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-324559-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-324559 --entrypoint /usr/bin/test -v ingress-addon-legacy-324559:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib: (1.500745419s)
	I1226 22:07:13.921441  730714 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-324559
	I1226 22:07:13.921468  730714 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1226 22:07:13.921487  730714 kic.go:194] Starting extracting preloaded images to volume ...
	I1226 22:07:13.921573  730714 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17857-697646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-324559:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir
	I1226 22:07:18.882381  730714 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17857-697646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-324559:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir: (4.960763672s)
	I1226 22:07:18.882413  730714 kic.go:203] duration metric: took 4.960924 seconds to extract preloaded images to volume
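The two docker runs above (volume prep, then preload extraction) amount to the following standalone sequence; a sketch assuming $KIC holds the kicbase image reference and $PRELOAD the host path of the tarball:

  docker volume create ingress-addon-legacy-324559
  # probe that /var/lib exists on the volume before extracting into it
  docker run --rm --entrypoint /usr/bin/test \
    -v ingress-addon-legacy-324559:/var "$KIC" -d /var/lib
  # unpack the lz4 preload into the volume
  docker run --rm --entrypoint /usr/bin/tar \
    -v "$PRELOAD:/preloaded.tar:ro" -v ingress-addon-legacy-324559:/extractDir \
    "$KIC" -I lz4 -xf /preloaded.tar -C /extractDir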
	W1226 22:07:18.882557  730714 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1226 22:07:18.882667  730714 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1226 22:07:18.954854  730714 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-324559 --name ingress-addon-legacy-324559 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-324559 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-324559 --network ingress-addon-legacy-324559 --ip 192.168.49.2 --volume ingress-addon-legacy-324559:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c
	I1226 22:07:19.311019  730714 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-324559 --format={{.State.Running}}
	I1226 22:07:19.335024  730714 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-324559 --format={{.State.Status}}
	I1226 22:07:19.358303  730714 cli_runner.go:164] Run: docker exec ingress-addon-legacy-324559 stat /var/lib/dpkg/alternatives/iptables
	I1226 22:07:19.441188  730714 oci.go:144] the created container "ingress-addon-legacy-324559" has a running status.
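Because 22/tcp was published to an ephemeral 127.0.0.1 port in the docker run above, the host-side SSH port has to be discovered after the fact; a sketch of two equivalent lookups, the second being the exact format string the log uses below:

  docker port ingress-addon-legacy-324559 22/tcp
  docker container inspect ingress-addon-legacy-324559 \
    -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'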
	I1226 22:07:19.441223  730714 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17857-697646/.minikube/machines/ingress-addon-legacy-324559/id_rsa...
	I1226 22:07:20.027884  730714 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/machines/ingress-addon-legacy-324559/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1226 22:07:20.027941  730714 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17857-697646/.minikube/machines/ingress-addon-legacy-324559/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1226 22:07:20.068037  730714 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-324559 --format={{.State.Status}}
	I1226 22:07:20.096273  730714 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1226 22:07:20.096301  730714 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-324559 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1226 22:07:20.178375  730714 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-324559 --format={{.State.Status}}
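The ssh-key provisioning above boils down to three steps; a minimal sketch (docker cp stands in for the kic runner's temp-file copy, and the local key path is illustrative):

  ssh-keygen -t rsa -N '' -f ./id_rsa
  docker cp ./id_rsa.pub ingress-addon-legacy-324559:/home/docker/.ssh/authorized_keys
  docker exec --privileged ingress-addon-legacy-324559 \
    chown docker:docker /home/docker/.ssh/authorized_keys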
	I1226 22:07:20.217007  730714 machine.go:88] provisioning docker machine ...
	I1226 22:07:20.217040  730714 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-324559"
	I1226 22:07:20.217109  730714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-324559
	I1226 22:07:20.250112  730714 main.go:141] libmachine: Using SSH client type: native
	I1226 22:07:20.250552  730714 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33686 <nil> <nil>}
	I1226 22:07:20.250573  730714 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-324559 && echo "ingress-addon-legacy-324559" | sudo tee /etc/hostname
	I1226 22:07:20.428170  730714 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-324559
	
	I1226 22:07:20.428311  730714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-324559
	I1226 22:07:20.459683  730714 main.go:141] libmachine: Using SSH client type: native
	I1226 22:07:20.460082  730714 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33686 <nil> <nil>}
	I1226 22:07:20.460106  730714 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-324559' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-324559/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-324559' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1226 22:07:20.609941  730714 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1226 22:07:20.609968  730714 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17857-697646/.minikube CaCertPath:/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17857-697646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17857-697646/.minikube}
	I1226 22:07:20.609997  730714 ubuntu.go:177] setting up certificates
	I1226 22:07:20.610011  730714 provision.go:83] configureAuth start
	I1226 22:07:20.610072  730714 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-324559
	I1226 22:07:20.633095  730714 provision.go:138] copyHostCerts
	I1226 22:07:20.633152  730714 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17857-697646/.minikube/ca.pem
	I1226 22:07:20.633184  730714 exec_runner.go:144] found /home/jenkins/minikube-integration/17857-697646/.minikube/ca.pem, removing ...
	I1226 22:07:20.633196  730714 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17857-697646/.minikube/ca.pem
	I1226 22:07:20.633265  730714 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17857-697646/.minikube/ca.pem (1082 bytes)
	I1226 22:07:20.633362  730714 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17857-697646/.minikube/cert.pem
	I1226 22:07:20.633387  730714 exec_runner.go:144] found /home/jenkins/minikube-integration/17857-697646/.minikube/cert.pem, removing ...
	I1226 22:07:20.633395  730714 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17857-697646/.minikube/cert.pem
	I1226 22:07:20.633422  730714 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17857-697646/.minikube/cert.pem (1123 bytes)
	I1226 22:07:20.633478  730714 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17857-697646/.minikube/key.pem
	I1226 22:07:20.633502  730714 exec_runner.go:144] found /home/jenkins/minikube-integration/17857-697646/.minikube/key.pem, removing ...
	I1226 22:07:20.633510  730714 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17857-697646/.minikube/key.pem
	I1226 22:07:20.633536  730714 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17857-697646/.minikube/key.pem (1679 bytes)
	I1226 22:07:20.633584  730714 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17857-697646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-324559 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-324559]
	I1226 22:07:20.981925  730714 provision.go:172] copyRemoteCerts
	I1226 22:07:20.982025  730714 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1226 22:07:20.982076  730714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-324559
	I1226 22:07:21.000912  730714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33686 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/ingress-addon-legacy-324559/id_rsa Username:docker}
	I1226 22:07:21.103433  730714 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1226 22:07:21.103497  730714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1226 22:07:21.133442  730714 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1226 22:07:21.133513  730714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I1226 22:07:21.165992  730714 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1226 22:07:21.166100  730714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1226 22:07:21.196658  730714 provision.go:86] duration metric: configureAuth took 586.612094ms
	I1226 22:07:21.196725  730714 ubuntu.go:193] setting minikube options for container-runtime
	I1226 22:07:21.196941  730714 config.go:182] Loaded profile config "ingress-addon-legacy-324559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1226 22:07:21.197051  730714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-324559
	I1226 22:07:21.218499  730714 main.go:141] libmachine: Using SSH client type: native
	I1226 22:07:21.218934  730714 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33686 <nil> <nil>}
	I1226 22:07:21.218962  730714 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1226 22:07:21.502073  730714 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1226 22:07:21.502098  730714 machine.go:91] provisioned docker machine in 1.285068182s
	I1226 22:07:21.502107  730714 client.go:171] LocalClient.Create took 9.247985258s
	I1226 22:07:21.502121  730714 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-324559" took 9.248036342s
	I1226 22:07:21.502128  730714 start.go:300] post-start starting for "ingress-addon-legacy-324559" (driver="docker")
	I1226 22:07:21.502141  730714 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1226 22:07:21.502208  730714 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1226 22:07:21.502258  730714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-324559
	I1226 22:07:21.520366  730714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33686 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/ingress-addon-legacy-324559/id_rsa Username:docker}
	I1226 22:07:21.624815  730714 ssh_runner.go:195] Run: cat /etc/os-release
	I1226 22:07:21.629306  730714 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1226 22:07:21.629343  730714 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1226 22:07:21.629355  730714 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1226 22:07:21.629362  730714 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1226 22:07:21.629376  730714 filesync.go:126] Scanning /home/jenkins/minikube-integration/17857-697646/.minikube/addons for local assets ...
	I1226 22:07:21.629448  730714 filesync.go:126] Scanning /home/jenkins/minikube-integration/17857-697646/.minikube/files for local assets ...
	I1226 22:07:21.629557  730714 filesync.go:149] local asset: /home/jenkins/minikube-integration/17857-697646/.minikube/files/etc/ssl/certs/7030362.pem -> 7030362.pem in /etc/ssl/certs
	I1226 22:07:21.629568  730714 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/files/etc/ssl/certs/7030362.pem -> /etc/ssl/certs/7030362.pem
	I1226 22:07:21.629707  730714 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1226 22:07:21.641283  730714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/files/etc/ssl/certs/7030362.pem --> /etc/ssl/certs/7030362.pem (1708 bytes)
	I1226 22:07:21.670457  730714 start.go:303] post-start completed in 168.310587ms
	I1226 22:07:21.670845  730714 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-324559
	I1226 22:07:21.689032  730714 profile.go:148] Saving config to /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/config.json ...
	I1226 22:07:21.689422  730714 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1226 22:07:21.689474  730714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-324559
	I1226 22:07:21.711505  730714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33686 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/ingress-addon-legacy-324559/id_rsa Username:docker}
	I1226 22:07:21.807132  730714 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1226 22:07:21.813668  730714 start.go:128] duration metric: createHost completed in 9.562376059s
	I1226 22:07:21.813698  730714 start.go:83] releasing machines lock for "ingress-addon-legacy-324559", held for 9.562518489s
	I1226 22:07:21.813792  730714 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-324559
	I1226 22:07:21.831415  730714 ssh_runner.go:195] Run: cat /version.json
	I1226 22:07:21.831426  730714 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1226 22:07:21.831481  730714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-324559
	I1226 22:07:21.831486  730714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-324559
	I1226 22:07:21.852418  730714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33686 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/ingress-addon-legacy-324559/id_rsa Username:docker}
	I1226 22:07:21.862296  730714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33686 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/ingress-addon-legacy-324559/id_rsa Username:docker}
	I1226 22:07:21.949110  730714 ssh_runner.go:195] Run: systemctl --version
	I1226 22:07:22.088501  730714 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1226 22:07:22.237809  730714 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1226 22:07:22.243226  730714 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1226 22:07:22.266246  730714 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1226 22:07:22.266333  730714 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1226 22:07:22.309056  730714 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
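The two find commands above neutralize pre-existing CNI configs by renaming rather than deleting them; a cleaner standalone sketch of the same idea (quoting tightened relative to the logged invocation):

  sudo find /etc/cni/net.d -maxdepth 1 -type f -name '*loopback.conf*' -not -name '*.mk_disabled' \
    -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
  sudo find /etc/cni/net.d -maxdepth 1 -type f \
    \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
    -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;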
	I1226 22:07:22.309084  730714 start.go:475] detecting cgroup driver to use...
	I1226 22:07:22.309117  730714 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1226 22:07:22.309170  730714 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1226 22:07:22.328613  730714 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1226 22:07:22.342786  730714 docker.go:203] disabling cri-docker service (if available) ...
	I1226 22:07:22.342852  730714 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1226 22:07:22.358160  730714 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1226 22:07:22.374425  730714 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1226 22:07:22.485238  730714 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1226 22:07:22.595842  730714 docker.go:219] disabling docker service ...
	I1226 22:07:22.595938  730714 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1226 22:07:22.618459  730714 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1226 22:07:22.631801  730714 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1226 22:07:22.730761  730714 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1226 22:07:22.835661  730714 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1226 22:07:22.850646  730714 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1226 22:07:22.872192  730714 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1226 22:07:22.872286  730714 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1226 22:07:22.888421  730714 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1226 22:07:22.888593  730714 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1226 22:07:22.901901  730714 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1226 22:07:22.914541  730714 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1226 22:07:22.927634  730714 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1226 22:07:22.939613  730714 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1226 22:07:22.950805  730714 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1226 22:07:22.960983  730714 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1226 22:07:23.051160  730714 ssh_runner.go:195] Run: sudo systemctl restart crio
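The cri-o reconfiguration above is a handful of in-place edits followed by a restart; collected into one sketch (all commands as logged):

  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
  sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
  sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
  sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
  sudo systemctl daemon-reload && sudo systemctl restart crio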
	I1226 22:07:23.176058  730714 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1226 22:07:23.176156  730714 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1226 22:07:23.180841  730714 start.go:543] Will wait 60s for crictl version
	I1226 22:07:23.180924  730714 ssh_runner.go:195] Run: which crictl
	I1226 22:07:23.185416  730714 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1226 22:07:23.228100  730714 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1226 22:07:23.228241  730714 ssh_runner.go:195] Run: crio --version
	I1226 22:07:23.272314  730714 ssh_runner.go:195] Run: crio --version
	I1226 22:07:23.321818  730714 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I1226 22:07:23.323678  730714 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-324559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1226 22:07:23.341216  730714 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1226 22:07:23.345878  730714 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1226 22:07:23.359804  730714 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1226 22:07:23.359875  730714 ssh_runner.go:195] Run: sudo crictl images --output json
	I1226 22:07:23.412973  730714 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1226 22:07:23.413047  730714 ssh_runner.go:195] Run: which lz4
	I1226 22:07:23.417471  730714 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 -> /preloaded.tar.lz4
	I1226 22:07:23.417570  730714 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1226 22:07:23.421705  730714 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1226 22:07:23.421740  730714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (489766197 bytes)
	I1226 22:07:25.598401  730714 crio.go:444] Took 2.180869 seconds to copy over tarball
	I1226 22:07:25.598478  730714 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1226 22:07:28.274242  730714 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.675734587s)
	I1226 22:07:28.274266  730714 crio.go:451] Took 2.675841 seconds to extract the tarball
	I1226 22:07:28.274276  730714 ssh_runner.go:146] rm: /preloaded.tar.lz4
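On the node side, the preload lands as a single scp followed by an lz4-aware tar extraction and cleanup; in one line (paths as logged):

  sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm -f /preloaded.tar.lz4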
	I1226 22:07:28.359451  730714 ssh_runner.go:195] Run: sudo crictl images --output json
	I1226 22:07:28.399196  730714 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1226 22:07:28.399226  730714 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1226 22:07:28.399276  730714 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1226 22:07:28.399503  730714 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1226 22:07:28.399581  730714 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1226 22:07:28.399662  730714 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1226 22:07:28.399734  730714 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1226 22:07:28.399791  730714 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1226 22:07:28.399845  730714 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1226 22:07:28.399906  730714 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1226 22:07:28.400837  730714 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1226 22:07:28.401241  730714 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1226 22:07:28.401394  730714 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1226 22:07:28.401527  730714 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1226 22:07:28.401662  730714 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1226 22:07:28.401782  730714 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1226 22:07:28.401911  730714 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1226 22:07:28.402035  730714 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	W1226 22:07:28.758142  730714 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1226 22:07:28.758398  730714 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	W1226 22:07:28.763029  730714 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1226 22:07:28.763261  730714 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	W1226 22:07:28.779656  730714 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	W1226 22:07:28.779751  730714 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I1226 22:07:28.779856  730714 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1226 22:07:28.779981  730714 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1226 22:07:28.786689  730714 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	W1226 22:07:28.793487  730714 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1226 22:07:28.793742  730714 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	W1226 22:07:28.804435  730714 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1226 22:07:28.804801  730714 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
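The arch-mismatch warnings above come from comparing the image's manifest architecture with the host; a sketch of checking this directly (the .Architecture field is assumed from podman's image-inspect output):

  sudo podman image inspect --format '{{.Architecture}}' registry.k8s.io/kube-proxy:v1.18.20
  uname -m    # arm64 hosts report aarch64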
	I1226 22:07:28.899832  730714 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I1226 22:07:28.899892  730714 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1226 22:07:28.899947  730714 ssh_runner.go:195] Run: which crictl
	I1226 22:07:28.909229  730714 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I1226 22:07:28.909270  730714 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1226 22:07:28.909321  730714 ssh_runner.go:195] Run: which crictl
	W1226 22:07:28.933249  730714 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1226 22:07:28.933414  730714 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1226 22:07:28.959519  730714 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I1226 22:07:28.959562  730714 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1226 22:07:28.959618  730714 ssh_runner.go:195] Run: which crictl
	I1226 22:07:28.966878  730714 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I1226 22:07:28.966915  730714 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1226 22:07:28.966967  730714 ssh_runner.go:195] Run: which crictl
	I1226 22:07:28.967044  730714 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I1226 22:07:28.967067  730714 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1226 22:07:28.967092  730714 ssh_runner.go:195] Run: which crictl
	I1226 22:07:29.004950  730714 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I1226 22:07:29.004999  730714 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1226 22:07:29.005052  730714 ssh_runner.go:195] Run: which crictl
	I1226 22:07:29.015013  730714 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I1226 22:07:29.015059  730714 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1226 22:07:29.015141  730714 ssh_runner.go:195] Run: which crictl
	I1226 22:07:29.015220  730714 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1226 22:07:29.015280  730714 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1226 22:07:29.139634  730714 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1226 22:07:29.139682  730714 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1226 22:07:29.139732  730714 ssh_runner.go:195] Run: which crictl
	I1226 22:07:29.139832  730714 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I1226 22:07:29.139863  730714 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1226 22:07:29.139917  730714 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1226 22:07:29.140010  730714 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1226 22:07:29.140054  730714 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I1226 22:07:29.140102  730714 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1226 22:07:29.140151  730714 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1226 22:07:29.288542  730714 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I1226 22:07:29.288599  730714 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I1226 22:07:29.288643  730714 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I1226 22:07:29.288687  730714 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I1226 22:07:29.288720  730714 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1226 22:07:29.288731  730714 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I1226 22:07:29.362144  730714 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1226 22:07:29.362258  730714 cache_images.go:92] LoadImages completed in 963.016699ms
	W1226 22:07:29.362330  730714 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20: no such file or directory
	I1226 22:07:29.362404  730714 ssh_runner.go:195] Run: crio config
	I1226 22:07:29.424895  730714 cni.go:84] Creating CNI manager for ""
	I1226 22:07:29.424919  730714 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1226 22:07:29.424973  730714 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1226 22:07:29.425000  730714 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-324559 NodeName:ingress-addon-legacy-324559 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1226 22:07:29.425192  730714 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-324559"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1226 22:07:29.425284  730714 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-324559 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-324559 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1226 22:07:29.425391  730714 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1226 22:07:29.436322  730714 binaries.go:44] Found k8s binaries, skipping transfer
	I1226 22:07:29.436435  730714 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1226 22:07:29.447354  730714 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I1226 22:07:29.468756  730714 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1226 22:07:29.490440  730714 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
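With the kubeadm config staged at /var/tmp/minikube/kubeadm.yaml.new, it can be sanity-checked before the real bootstrap; a sketch assuming kubeadm v1.18's --dry-run behaviour (binary path taken from the kubelet unit above):

  sudo /var/lib/minikube/binaries/v1.18.20/kubeadm init \
    --config /var/tmp/minikube/kubeadm.yaml.new --dry-run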
	I1226 22:07:29.511947  730714 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1226 22:07:29.516631  730714 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1226 22:07:29.530156  730714 certs.go:56] Setting up /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559 for IP: 192.168.49.2
	I1226 22:07:29.530189  730714 certs.go:190] acquiring lock for shared ca certs: {Name:mke6488a150c186a525017f74b8a69a9f5240d76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 22:07:29.530384  730714 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17857-697646/.minikube/ca.key
	I1226 22:07:29.530430  730714 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17857-697646/.minikube/proxy-client-ca.key
	I1226 22:07:29.530487  730714 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/client.key
	I1226 22:07:29.530501  730714 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/client.crt with IP's: []
	I1226 22:07:29.972524  730714 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/client.crt ...
	I1226 22:07:29.972555  730714 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/client.crt: {Name:mk8a22fe6e0abd719a82f98f1fe6479d73ab1657 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 22:07:29.972755  730714 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/client.key ...
	I1226 22:07:29.972769  730714 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/client.key: {Name:mk9d727a6f66199779723590026fbcb60bde4dcc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 22:07:29.972858  730714 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/apiserver.key.dd3b5fb2
	I1226 22:07:29.972876  730714 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1226 22:07:30.419952  730714 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/apiserver.crt.dd3b5fb2 ...
	I1226 22:07:30.419983  730714 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/apiserver.crt.dd3b5fb2: {Name:mk00e9e189795c3e50287394df7a3f2d3d3de7ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 22:07:30.420176  730714 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/apiserver.key.dd3b5fb2 ...
	I1226 22:07:30.420190  730714 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/apiserver.key.dd3b5fb2: {Name:mk770b7073fc81eb854c1f0707b6a612caa7058b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 22:07:30.420274  730714 certs.go:337] copying /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/apiserver.crt
	I1226 22:07:30.420350  730714 certs.go:341] copying /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/apiserver.key
	I1226 22:07:30.420410  730714 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/proxy-client.key
	I1226 22:07:30.420426  730714 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/proxy-client.crt with IP's: []
	I1226 22:07:30.598927  730714 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/proxy-client.crt ...
	I1226 22:07:30.598959  730714 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/proxy-client.crt: {Name:mkb3c8029447d34cff0a9e60b1b875fff68b3905 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 22:07:30.599145  730714 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/proxy-client.key ...
	I1226 22:07:30.599158  730714 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/proxy-client.key: {Name:mk3a636739633a15143f255b034abd0774f605b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 22:07:30.599242  730714 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1226 22:07:30.599262  730714 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1226 22:07:30.599274  730714 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1226 22:07:30.599290  730714 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1226 22:07:30.599301  730714 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1226 22:07:30.599316  730714 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1226 22:07:30.599327  730714 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1226 22:07:30.599341  730714 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1226 22:07:30.599409  730714 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/home/jenkins/minikube-integration/17857-697646/.minikube/certs/703036.pem (1338 bytes)
	W1226 22:07:30.599444  730714 certs.go:433] ignoring /home/jenkins/minikube-integration/17857-697646/.minikube/certs/home/jenkins/minikube-integration/17857-697646/.minikube/certs/703036_empty.pem, impossibly tiny 0 bytes
	I1226 22:07:30.599458  730714 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca-key.pem (1675 bytes)
	I1226 22:07:30.599491  730714 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem (1082 bytes)
	I1226 22:07:30.599518  730714 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/home/jenkins/minikube-integration/17857-697646/.minikube/certs/cert.pem (1123 bytes)
	I1226 22:07:30.599550  730714 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/home/jenkins/minikube-integration/17857-697646/.minikube/certs/key.pem (1679 bytes)
	I1226 22:07:30.599598  730714 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-697646/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17857-697646/.minikube/files/etc/ssl/certs/7030362.pem (1708 bytes)
	I1226 22:07:30.599631  730714 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/703036.pem -> /usr/share/ca-certificates/703036.pem
	I1226 22:07:30.599652  730714 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/files/etc/ssl/certs/7030362.pem -> /usr/share/ca-certificates/7030362.pem
	I1226 22:07:30.599667  730714 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1226 22:07:30.600269  730714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1226 22:07:30.629179  730714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1226 22:07:30.658193  730714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1226 22:07:30.686902  730714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1226 22:07:30.715874  730714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1226 22:07:30.745033  730714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1226 22:07:30.773521  730714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1226 22:07:30.802255  730714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1226 22:07:30.831162  730714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/certs/703036.pem --> /usr/share/ca-certificates/703036.pem (1338 bytes)
	I1226 22:07:30.860045  730714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/files/etc/ssl/certs/7030362.pem --> /usr/share/ca-certificates/7030362.pem (1708 bytes)
	I1226 22:07:30.888772  730714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1226 22:07:30.918212  730714 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1226 22:07:30.940057  730714 ssh_runner.go:195] Run: openssl version
	I1226 22:07:30.947558  730714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/703036.pem && ln -fs /usr/share/ca-certificates/703036.pem /etc/ssl/certs/703036.pem"
	I1226 22:07:30.959317  730714 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/703036.pem
	I1226 22:07:30.964090  730714 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 26 21:58 /usr/share/ca-certificates/703036.pem
	I1226 22:07:30.964198  730714 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/703036.pem
	I1226 22:07:30.972892  730714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/703036.pem /etc/ssl/certs/51391683.0"
	I1226 22:07:30.984569  730714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7030362.pem && ln -fs /usr/share/ca-certificates/7030362.pem /etc/ssl/certs/7030362.pem"
	I1226 22:07:30.996374  730714 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7030362.pem
	I1226 22:07:31.002047  730714 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 26 21:58 /usr/share/ca-certificates/7030362.pem
	I1226 22:07:31.002130  730714 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7030362.pem
	I1226 22:07:31.013541  730714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7030362.pem /etc/ssl/certs/3ec20f2e.0"
	I1226 22:07:31.026013  730714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1226 22:07:31.038083  730714 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1226 22:07:31.042849  730714 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 26 21:45 /usr/share/ca-certificates/minikubeCA.pem
	I1226 22:07:31.042947  730714 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1226 22:07:31.051799  730714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
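
The symlink targets used above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject hashes: at verify time OpenSSL hashes a certificate's subject and looks up <hash>.0 under /etc/ssl/certs. A minimal sketch of the same step done by hand, using the minikube CA path from this log:

    # Compute the subject hash OpenSSL uses for CA lookup.
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    # Publish the CA under /etc/ssl/certs by that hash, as the runner does above.
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
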
	I1226 22:07:31.063890  730714 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1226 22:07:31.068613  730714 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
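
The non-zero exit here is expected rather than an error: the runner probes /var/lib/minikube/certs/etcd and treats "No such file or directory" as evidence of a first start. The same probe pattern, sketched:

    # Exit status 2 from ls signals "no etcd certs yet" -> fresh cluster.
    if ! ls /var/lib/minikube/certs/etcd >/dev/null 2>&1; then
      echo "certs directory doesn't exist, likely first start"
    fi
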
	I1226 22:07:31.068670  730714 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-324559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-324559 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 22:07:31.068745  730714 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1226 22:07:31.068810  730714 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1226 22:07:31.112837  730714 cri.go:89] found id: ""
	I1226 22:07:31.112923  730714 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1226 22:07:31.124157  730714 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1226 22:07:31.135237  730714 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1226 22:07:31.135345  730714 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1226 22:07:31.146460  730714 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1226 22:07:31.146505  730714 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1226 22:07:31.201579  730714 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1226 22:07:31.201934  730714 kubeadm.go:322] [preflight] Running pre-flight checks
	I1226 22:07:31.254257  730714 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1226 22:07:31.254329  730714 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-aws
	I1226 22:07:31.254373  730714 kubeadm.go:322] OS: Linux
	I1226 22:07:31.254422  730714 kubeadm.go:322] CGROUPS_CPU: enabled
	I1226 22:07:31.254472  730714 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1226 22:07:31.254521  730714 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1226 22:07:31.254570  730714 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1226 22:07:31.254620  730714 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1226 22:07:31.254674  730714 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1226 22:07:31.348764  730714 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1226 22:07:31.348956  730714 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1226 22:07:31.349114  730714 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1226 22:07:31.590536  730714 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1226 22:07:31.592273  730714 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1226 22:07:31.592511  730714 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1226 22:07:31.701044  730714 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1226 22:07:31.705920  730714 out.go:204]   - Generating certificates and keys ...
	I1226 22:07:31.706030  730714 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1226 22:07:31.706122  730714 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1226 22:07:31.980294  730714 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1226 22:07:32.161749  730714 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1226 22:07:32.795328  730714 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1226 22:07:33.170219  730714 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1226 22:07:33.728883  730714 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1226 22:07:33.729478  730714 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-324559 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1226 22:07:34.153482  730714 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1226 22:07:34.153709  730714 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-324559 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1226 22:07:34.410856  730714 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1226 22:07:34.690243  730714 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1226 22:07:35.559562  730714 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1226 22:07:35.559885  730714 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1226 22:07:36.412816  730714 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1226 22:07:36.962283  730714 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1226 22:07:37.414406  730714 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1226 22:07:37.882580  730714 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1226 22:07:37.883346  730714 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1226 22:07:37.885831  730714 out.go:204]   - Booting up control plane ...
	I1226 22:07:37.885946  730714 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1226 22:07:37.892833  730714 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1226 22:07:37.894707  730714 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1226 22:07:37.902287  730714 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1226 22:07:37.909625  730714 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1226 22:07:50.412846  730714 kubeadm.go:322] [apiclient] All control plane components are healthy after 12.503047 seconds
	I1226 22:07:50.412961  730714 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1226 22:07:50.426955  730714 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1226 22:07:50.949310  730714 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1226 22:07:50.949471  730714 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-324559 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1226 22:07:51.458499  730714 kubeadm.go:322] [bootstrap-token] Using token: yb40p9.8404dfnyapvgue80
	I1226 22:07:51.460820  730714 out.go:204]   - Configuring RBAC rules ...
	I1226 22:07:51.460941  730714 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1226 22:07:51.466516  730714 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1226 22:07:51.474505  730714 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1226 22:07:51.477881  730714 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1226 22:07:51.481201  730714 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1226 22:07:51.484959  730714 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1226 22:07:51.496930  730714 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1226 22:07:51.840700  730714 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1226 22:07:51.936915  730714 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1226 22:07:51.941469  730714 kubeadm.go:322] 
	I1226 22:07:51.941562  730714 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1226 22:07:51.941582  730714 kubeadm.go:322] 
	I1226 22:07:51.941655  730714 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1226 22:07:51.941663  730714 kubeadm.go:322] 
	I1226 22:07:51.941691  730714 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1226 22:07:51.941749  730714 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1226 22:07:51.941807  730714 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1226 22:07:51.941816  730714 kubeadm.go:322] 
	I1226 22:07:51.941866  730714 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1226 22:07:51.941948  730714 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1226 22:07:51.942019  730714 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1226 22:07:51.942030  730714 kubeadm.go:322] 
	I1226 22:07:51.942119  730714 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1226 22:07:51.942194  730714 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1226 22:07:51.942201  730714 kubeadm.go:322] 
	I1226 22:07:51.942311  730714 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token yb40p9.8404dfnyapvgue80 \
	I1226 22:07:51.942422  730714 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:eb99bb3bfadffb0fd08fb657f91b723758be2a0ceabacd68fe612edc25108351 \
	I1226 22:07:51.942446  730714 kubeadm.go:322]     --control-plane 
	I1226 22:07:51.942452  730714 kubeadm.go:322] 
	I1226 22:07:51.942531  730714 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1226 22:07:51.942545  730714 kubeadm.go:322] 
	I1226 22:07:51.942623  730714 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token yb40p9.8404dfnyapvgue80 \
	I1226 22:07:51.942723  730714 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:eb99bb3bfadffb0fd08fb657f91b723758be2a0ceabacd68fe612edc25108351 
	I1226 22:07:51.942887  730714 kubeadm.go:322] W1226 22:07:31.200684    1226 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1226 22:07:51.943119  730714 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I1226 22:07:51.943263  730714 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1226 22:07:51.943401  730714 kubeadm.go:322] W1226 22:07:37.892502    1226 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1226 22:07:51.943529  730714 kubeadm.go:322] W1226 22:07:37.894877    1226 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
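
The join commands printed above embed a one-off bootstrap token (yb40p9.8404dfnyapvgue80), which expires after 24h by default. An equivalent line can be regenerated later on the control plane; a sketch, assuming admin credentials are already in place there:

    # Prints a fresh "kubeadm join <endpoint> --token ... --discovery-token-ca-cert-hash ..."
    sudo kubeadm token create --print-join-command
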
	I1226 22:07:51.943536  730714 cni.go:84] Creating CNI manager for ""
	I1226 22:07:51.943544  730714 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1226 22:07:51.945631  730714 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1226 22:07:51.947621  730714 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1226 22:07:51.952994  730714 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I1226 22:07:51.953021  730714 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1226 22:07:51.983255  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
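
Applying /var/tmp/minikube/cni.yaml installs the kindnet CNI recommended two lines earlier; the node cannot go Ready until it rolls out. A hedged check (the DaemonSet name "kindnet" is inferred from the kindnet-xp2bf pod seen later in this log, and is otherwise an assumption):

    kubectl -n kube-system rollout status daemonset/kindnet --timeout=120s
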
	I1226 22:07:52.441372  730714 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1226 22:07:52.441466  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:07:52.441489  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=393f165ced08f66e4386491f243850f87982a22b minikube.k8s.io/name=ingress-addon-legacy-324559 minikube.k8s.io/updated_at=2023_12_26T22_07_52_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:07:52.462949  730714 ops.go:34] apiserver oom_adj: -16
	I1226 22:07:52.567458  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:07:53.067764  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:07:53.568299  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:07:54.067689  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:07:54.568116  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:07:55.068577  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:07:55.568346  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:07:56.067674  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:07:56.567557  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:07:57.067607  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:07:57.568469  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:07:58.068402  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:07:58.567679  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:07:59.068180  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:07:59.568323  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:08:00.068649  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:08:00.567634  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:08:01.068270  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:08:01.568062  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:08:02.067967  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:08:02.567788  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:08:03.067915  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:08:03.568493  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:08:04.068568  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:08:04.567595  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:08:05.068312  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:08:05.567741  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:08:06.067579  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:08:06.567563  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:08:07.068314  730714 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
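
The repeated "get sa default" runs above are a deliberate poll: the controller-manager creates the default ServiceAccount asynchronously after init, so the runner retries until it exists (about 14.7s here, per the next line). The same wait as a plain loop:

    # Retry until the default ServiceAccount exists.
    until sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
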
	I1226 22:08:07.171114  730714 kubeadm.go:1088] duration metric: took 14.729731541s to wait for elevateKubeSystemPrivileges.
	I1226 22:08:07.171170  730714 kubeadm.go:406] StartCluster complete in 36.102505887s
	I1226 22:08:07.171187  730714 settings.go:142] acquiring lock: {Name:mk1b89d623875ac96830001bdd0fc2b8d8c10aec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 22:08:07.171248  730714 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17857-697646/kubeconfig
	I1226 22:08:07.171995  730714 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-697646/kubeconfig: {Name:mk171fc32e21f516abb68bc5ebeb628b3c1d7f0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 22:08:07.172585  730714 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1226 22:08:07.172886  730714 config.go:182] Loaded profile config "ingress-addon-legacy-324559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1226 22:08:07.173036  730714 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I1226 22:08:07.173187  730714 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-324559"
	I1226 22:08:07.173210  730714 addons.go:237] Setting addon storage-provisioner=true in "ingress-addon-legacy-324559"
	I1226 22:08:07.173247  730714 host.go:66] Checking if "ingress-addon-legacy-324559" exists ...
	I1226 22:08:07.173152  730714 kapi.go:59] client config for ingress-addon-legacy-324559: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/client.crt", KeyFile:"/home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/client.key", CAFile:"/home/jenkins/minikube-integration/17857-697646/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bfa00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1226 22:08:07.173743  730714 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-324559 --format={{.State.Status}}
	I1226 22:08:07.174270  730714 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-324559"
	I1226 22:08:07.174297  730714 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-324559"
	I1226 22:08:07.174587  730714 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-324559 --format={{.State.Status}}
	I1226 22:08:07.174855  730714 cert_rotation.go:137] Starting client certificate rotation controller
	I1226 22:08:07.228315  730714 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1226 22:08:07.226991  730714 kapi.go:59] client config for ingress-addon-legacy-324559: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/client.crt", KeyFile:"/home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/client.key", CAFile:"/home/jenkins/minikube-integration/17857-697646/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bfa00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1226 22:08:07.230861  730714 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1226 22:08:07.230880  730714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1226 22:08:07.230945  730714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-324559
	I1226 22:08:07.230973  730714 addons.go:237] Setting addon default-storageclass=true in "ingress-addon-legacy-324559"
	I1226 22:08:07.231005  730714 host.go:66] Checking if "ingress-addon-legacy-324559" exists ...
	I1226 22:08:07.231494  730714 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-324559 --format={{.State.Status}}
	I1226 22:08:07.264296  730714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33686 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/ingress-addon-legacy-324559/id_rsa Username:docker}
	I1226 22:08:07.281609  730714 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I1226 22:08:07.281631  730714 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1226 22:08:07.281693  730714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-324559
	I1226 22:08:07.305369  730714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33686 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/ingress-addon-legacy-324559/id_rsa Username:docker}
	I1226 22:08:07.494641  730714 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1226 22:08:07.524329  730714 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1226 22:08:07.569614  730714 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
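
That one-liner rewrites the CoreDNS Corefile in flight. Reflowed for readability (same commands as the log line, nothing added): it fetches the coredns ConfigMap, injects a hosts{} block resolving host.minikube.internal to the host gateway plus a log directive, then replaces the ConfigMap:

    sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        -n kube-system get configmap coredns -o yaml \
      | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' \
            -e '/^        errors *$/i \        log' \
      | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -
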
	I1226 22:08:07.693712  730714 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-324559" context rescaled to 1 replicas
	I1226 22:08:07.693755  730714 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1226 22:08:07.695994  730714 out.go:177] * Verifying Kubernetes components...
	I1226 22:08:07.698790  730714 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1226 22:08:08.380254  730714 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1226 22:08:08.380889  730714 kapi.go:59] client config for ingress-addon-legacy-324559: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/client.crt", KeyFile:"/home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/client.key", CAFile:"/home/jenkins/minikube-integration/17857-697646/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bfa00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1226 22:08:08.381145  730714 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-324559" to be "Ready" ...
	I1226 22:08:08.464566  730714 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1226 22:08:08.466871  730714 addons.go:508] enable addons completed in 1.293843953s: enabled=[storage-provisioner default-storageclass]
	I1226 22:08:10.384868  730714 node_ready.go:58] node "ingress-addon-legacy-324559" has status "Ready":"False"
	I1226 22:08:12.384914  730714 node_ready.go:58] node "ingress-addon-legacy-324559" has status "Ready":"False"
	I1226 22:08:14.885051  730714 node_ready.go:58] node "ingress-addon-legacy-324559" has status "Ready":"False"
	I1226 22:08:15.384900  730714 node_ready.go:49] node "ingress-addon-legacy-324559" has status "Ready":"True"
	I1226 22:08:15.384926  730714 node_ready.go:38] duration metric: took 7.00376792s waiting for node "ingress-addon-legacy-324559" to be "Ready" ...
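
The Ready poll above (22:08:08 through 22:08:15) can be reproduced declaratively with kubectl's built-in condition wait; a sketch using the node name from this log:

    kubectl wait --for=condition=Ready node/ingress-addon-legacy-324559 --timeout=360s
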
	I1226 22:08:15.384935  730714 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1226 22:08:15.394258  730714 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-lsmfr" in "kube-system" namespace to be "Ready" ...
	I1226 22:08:17.397665  730714 pod_ready.go:102] pod "coredns-66bff467f8-lsmfr" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-12-26 22:08:07 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1226 22:08:19.400834  730714 pod_ready.go:102] pod "coredns-66bff467f8-lsmfr" in "kube-system" namespace has status "Ready":"False"
	I1226 22:08:20.401095  730714 pod_ready.go:92] pod "coredns-66bff467f8-lsmfr" in "kube-system" namespace has status "Ready":"True"
	I1226 22:08:20.401122  730714 pod_ready.go:81] duration metric: took 5.006826566s waiting for pod "coredns-66bff467f8-lsmfr" in "kube-system" namespace to be "Ready" ...
	I1226 22:08:20.401134  730714 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-324559" in "kube-system" namespace to be "Ready" ...
	I1226 22:08:20.413464  730714 pod_ready.go:92] pod "etcd-ingress-addon-legacy-324559" in "kube-system" namespace has status "Ready":"True"
	I1226 22:08:20.413497  730714 pod_ready.go:81] duration metric: took 12.353579ms waiting for pod "etcd-ingress-addon-legacy-324559" in "kube-system" namespace to be "Ready" ...
	I1226 22:08:20.413520  730714 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-324559" in "kube-system" namespace to be "Ready" ...
	I1226 22:08:20.419526  730714 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-324559" in "kube-system" namespace has status "Ready":"True"
	I1226 22:08:20.419553  730714 pod_ready.go:81] duration metric: took 6.02474ms waiting for pod "kube-apiserver-ingress-addon-legacy-324559" in "kube-system" namespace to be "Ready" ...
	I1226 22:08:20.419565  730714 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-324559" in "kube-system" namespace to be "Ready" ...
	I1226 22:08:20.426152  730714 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-324559" in "kube-system" namespace has status "Ready":"True"
	I1226 22:08:20.426221  730714 pod_ready.go:81] duration metric: took 6.647314ms waiting for pod "kube-controller-manager-ingress-addon-legacy-324559" in "kube-system" namespace to be "Ready" ...
	I1226 22:08:20.426249  730714 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nv5jt" in "kube-system" namespace to be "Ready" ...
	I1226 22:08:20.442495  730714 pod_ready.go:92] pod "kube-proxy-nv5jt" in "kube-system" namespace has status "Ready":"True"
	I1226 22:08:20.442558  730714 pod_ready.go:81] duration metric: took 16.288192ms waiting for pod "kube-proxy-nv5jt" in "kube-system" namespace to be "Ready" ...
	I1226 22:08:20.442586  730714 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-324559" in "kube-system" namespace to be "Ready" ...
	I1226 22:08:20.596001  730714 request.go:629] Waited for 153.269152ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-324559
	I1226 22:08:20.795952  730714 request.go:629] Waited for 197.313162ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-324559
	I1226 22:08:20.798863  730714 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-324559" in "kube-system" namespace has status "Ready":"True"
	I1226 22:08:20.798892  730714 pod_ready.go:81] duration metric: took 356.282154ms waiting for pod "kube-scheduler-ingress-addon-legacy-324559" in "kube-system" namespace to be "Ready" ...
	I1226 22:08:20.798906  730714 pod_ready.go:38] duration metric: took 5.413959031s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1226 22:08:20.798920  730714 api_server.go:52] waiting for apiserver process to appear ...
	I1226 22:08:20.798983  730714 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1226 22:08:20.812456  730714 api_server.go:72] duration metric: took 13.118667363s to wait for apiserver process to appear ...
	I1226 22:08:20.812580  730714 api_server.go:88] waiting for apiserver healthz status ...
	I1226 22:08:20.812615  730714 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1226 22:08:20.821561  730714 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1226 22:08:20.822511  730714 api_server.go:141] control plane version: v1.18.20
	I1226 22:08:20.822539  730714 api_server.go:131] duration metric: took 9.935075ms to wait for apiserver health ...
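
The healthz probe is a plain HTTPS GET against the apiserver; the same check by hand (pass --cacert with the ca.crt staged earlier, or -k to skip verification in a throwaway cluster):

    curl -k https://192.168.49.2:8443/healthz   # expect: ok
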
	I1226 22:08:20.822548  730714 system_pods.go:43] waiting for kube-system pods to appear ...
	I1226 22:08:20.995947  730714 request.go:629] Waited for 173.311235ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1226 22:08:21.001858  730714 system_pods.go:59] 8 kube-system pods found
	I1226 22:08:21.001899  730714 system_pods.go:61] "coredns-66bff467f8-lsmfr" [57d86f7d-5932-4ab4-ab83-a9ffd33cbc12] Running
	I1226 22:08:21.001906  730714 system_pods.go:61] "etcd-ingress-addon-legacy-324559" [787ba3f5-4dcb-4c02-99cd-b635e2a60d83] Running
	I1226 22:08:21.001913  730714 system_pods.go:61] "kindnet-xp2bf" [53d917f0-8851-4f9a-95bd-ecf62017fc1d] Running
	I1226 22:08:21.001919  730714 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-324559" [4f6831ee-03a7-4a54-9b62-0a3af3624f26] Running
	I1226 22:08:21.001925  730714 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-324559" [ffb495d2-e0d1-4362-9ee1-b9809c41b8b0] Running
	I1226 22:08:21.001930  730714 system_pods.go:61] "kube-proxy-nv5jt" [98081056-1e5f-4ad8-bb67-7da69b2e48c3] Running
	I1226 22:08:21.001936  730714 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-324559" [b12f331f-5253-4e48-bf19-0a649cc5a6a7] Running
	I1226 22:08:21.001945  730714 system_pods.go:61] "storage-provisioner" [e04e3e5c-a9ca-4733-b61d-aa5e4f84a94c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1226 22:08:21.001954  730714 system_pods.go:74] duration metric: took 179.379412ms to wait for pod list to return data ...
	I1226 22:08:21.001965  730714 default_sa.go:34] waiting for default service account to be created ...
	I1226 22:08:21.196372  730714 request.go:629] Waited for 194.29737ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1226 22:08:21.200821  730714 default_sa.go:45] found service account: "default"
	I1226 22:08:21.200851  730714 default_sa.go:55] duration metric: took 198.880182ms for default service account to be created ...
	I1226 22:08:21.200861  730714 system_pods.go:116] waiting for k8s-apps to be running ...
	I1226 22:08:21.396245  730714 request.go:629] Waited for 195.32344ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1226 22:08:21.402047  730714 system_pods.go:86] 8 kube-system pods found
	I1226 22:08:21.402081  730714 system_pods.go:89] "coredns-66bff467f8-lsmfr" [57d86f7d-5932-4ab4-ab83-a9ffd33cbc12] Running
	I1226 22:08:21.402089  730714 system_pods.go:89] "etcd-ingress-addon-legacy-324559" [787ba3f5-4dcb-4c02-99cd-b635e2a60d83] Running
	I1226 22:08:21.402094  730714 system_pods.go:89] "kindnet-xp2bf" [53d917f0-8851-4f9a-95bd-ecf62017fc1d] Running
	I1226 22:08:21.402129  730714 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-324559" [4f6831ee-03a7-4a54-9b62-0a3af3624f26] Running
	I1226 22:08:21.402140  730714 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-324559" [ffb495d2-e0d1-4362-9ee1-b9809c41b8b0] Running
	I1226 22:08:21.402146  730714 system_pods.go:89] "kube-proxy-nv5jt" [98081056-1e5f-4ad8-bb67-7da69b2e48c3] Running
	I1226 22:08:21.402155  730714 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-324559" [b12f331f-5253-4e48-bf19-0a649cc5a6a7] Running
	I1226 22:08:21.402162  730714 system_pods.go:89] "storage-provisioner" [e04e3e5c-a9ca-4733-b61d-aa5e4f84a94c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1226 22:08:21.402173  730714 system_pods.go:126] duration metric: took 201.305784ms to wait for k8s-apps to be running ...
	I1226 22:08:21.402181  730714 system_svc.go:44] waiting for kubelet service to be running ....
	I1226 22:08:21.402253  730714 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1226 22:08:21.420968  730714 system_svc.go:56] duration metric: took 18.776471ms WaitForService to wait for kubelet.
	I1226 22:08:21.420996  730714 kubeadm.go:581] duration metric: took 13.727213646s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1226 22:08:21.421017  730714 node_conditions.go:102] verifying NodePressure condition ...
	I1226 22:08:21.596404  730714 request.go:629] Waited for 175.303082ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1226 22:08:21.599442  730714 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1226 22:08:21.599474  730714 node_conditions.go:123] node cpu capacity is 2
	I1226 22:08:21.599486  730714 node_conditions.go:105] duration metric: took 178.463092ms to run NodePressure ...
	I1226 22:08:21.599517  730714 start.go:228] waiting for startup goroutines ...
	I1226 22:08:21.599529  730714 start.go:233] waiting for cluster config update ...
	I1226 22:08:21.599539  730714 start.go:242] writing updated cluster config ...
	I1226 22:08:21.599826  730714 ssh_runner.go:195] Run: rm -f paused
	I1226 22:08:21.666507  730714 start.go:600] kubectl: 1.29.0, cluster: 1.18.20 (minor skew: 11)
	I1226 22:08:21.669368  730714 out.go:177] 
	W1226 22:08:21.671674  730714 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.18.20.
	I1226 22:08:21.673612  730714 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1226 22:08:21.675473  730714 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-324559" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 26 22:14:41 ingress-addon-legacy-324559 crio[893]: time="2023-12-26 22:14:41.248815968Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=bc05a441-8cdb-4fec-9ba8-c5c1b4288454 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 26 22:14:41 ingress-addon-legacy-324559 crio[893]: time="2023-12-26 22:14:41.248971328Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=a3e833fe-b28c-4609-80a2-b0db107b54d8 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 26 22:14:41 ingress-addon-legacy-324559 crio[893]: time="2023-12-26 22:14:41.249041513Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=bc05a441-8cdb-4fec-9ba8-c5c1b4288454 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 26 22:14:48 ingress-addon-legacy-324559 crio[893]: time="2023-12-26 22:14:48.248376511Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=09b39677-948d-4591-952b-8f8540b5d0a9 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 26 22:14:54 ingress-addon-legacy-324559 crio[893]: time="2023-12-26 22:14:54.248403760Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=a8e43443-3ecc-4f3e-96d5-1327010bff1b name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 26 22:14:54 ingress-addon-legacy-324559 crio[893]: time="2023-12-26 22:14:54.248709599Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=a8e43443-3ecc-4f3e-96d5-1327010bff1b name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 26 22:14:54 ingress-addon-legacy-324559 crio[893]: time="2023-12-26 22:14:54.249290582Z" level=info msg="Pulling image: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=bbe81e9a-462f-4049-aba4-7dfc303086f0 name=/runtime.v1alpha2.ImageService/PullImage
	Dec 26 22:14:54 ingress-addon-legacy-324559 crio[893]: time="2023-12-26 22:14:54.251330995Z" level=info msg="Trying to access \"docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Dec 26 22:14:55 ingress-addon-legacy-324559 crio[893]: time="2023-12-26 22:14:55.248854685Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=efe075ec-6b68-4728-a7ce-82f83f1196b8 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 26 22:14:55 ingress-addon-legacy-324559 crio[893]: time="2023-12-26 22:14:55.249129690Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=efe075ec-6b68-4728-a7ce-82f83f1196b8 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 26 22:15:00 ingress-addon-legacy-324559 crio[893]: time="2023-12-26 22:15:00.248772670Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=d96e3643-2089-4dc8-9b2d-86635d556f9a name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 26 22:15:08 ingress-addon-legacy-324559 crio[893]: time="2023-12-26 22:15:08.248385884Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=125026b4-b859-4809-9703-9969766dbde4 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 26 22:15:08 ingress-addon-legacy-324559 crio[893]: time="2023-12-26 22:15:08.248704334Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=125026b4-b859-4809-9703-9969766dbde4 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 26 22:15:11 ingress-addon-legacy-324559 crio[893]: time="2023-12-26 22:15:11.248257767Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=1c079d39-19a2-4e1c-98ce-01914dc2f0ae name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 26 22:15:20 ingress-addon-legacy-324559 crio[893]: time="2023-12-26 22:15:20.248373014Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=e8a98adb-f664-4b4e-9b01-093052ecccdc name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 26 22:15:20 ingress-addon-legacy-324559 crio[893]: time="2023-12-26 22:15:20.248746716Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=e8a98adb-f664-4b4e-9b01-093052ecccdc name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 26 22:15:23 ingress-addon-legacy-324559 crio[893]: time="2023-12-26 22:15:23.248403055Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=161cd400-4ed1-48b1-8877-78e76a11d885 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 26 22:15:35 ingress-addon-legacy-324559 crio[893]: time="2023-12-26 22:15:35.248548847Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=91e1a55a-deac-4d5f-a3c3-7a91dc552586 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 26 22:15:35 ingress-addon-legacy-324559 crio[893]: time="2023-12-26 22:15:35.248819889Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=91e1a55a-deac-4d5f-a3c3-7a91dc552586 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 26 22:15:36 ingress-addon-legacy-324559 crio[893]: time="2023-12-26 22:15:36.248513556Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=172db6b5-5cc2-4d30-80e7-d55d49155a7e name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 26 22:15:38 ingress-addon-legacy-324559 crio[893]: time="2023-12-26 22:15:38.735953318Z" level=info msg="Pulling image: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=4ca69d9b-2c90-4c8d-908c-807300098eea name=/runtime.v1alpha2.ImageService/PullImage
	Dec 26 22:15:38 ingress-addon-legacy-324559 crio[893]: time="2023-12-26 22:15:38.737848659Z" level=info msg="Trying to access \"docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Dec 26 22:15:49 ingress-addon-legacy-324559 crio[893]: time="2023-12-26 22:15:49.248323524Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=634e6c13-eec1-4bd0-a8ba-3c55c5d4645b name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 26 22:15:50 ingress-addon-legacy-324559 crio[893]: time="2023-12-26 22:15:50.248389022Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=294f7b16-333e-451c-b5f5-607468695527 name=/runtime.v1alpha2.ImageService/ImageStatus
	Dec 26 22:15:50 ingress-addon-legacy-324559 crio[893]: time="2023-12-26 22:15:50.249308328Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=294f7b16-333e-451c-b5f5-607468695527 name=/runtime.v1alpha2.ImageService/ImageStatus
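
These CRI-O entries show the kube-webhook-certgen and minikube-ingress-dns images being re-checked and re-pulled without ever arriving, consistent with an ImagePullBackOff: the kubelet keeps asking for images CRI-O cannot fetch. One way to reproduce the pull by hand from inside the node (profile name and digest copied from this log):

    minikube -p ingress-addon-legacy-324559 ssh -- \
      sudo crictl pull docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7
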
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0aea528753119       gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2   7 minutes ago       Running             storage-provisioner       0                   d4374744882ae       storage-provisioner
	02f33002d9479       6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184                                                  7 minutes ago       Running             coredns                   0                   930284210ecee       coredns-66bff467f8-lsmfr
	dce0f84a81950       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                7 minutes ago       Running             kindnet-cni               0                   1fe757aa81c2b       kindnet-xp2bf
	36c3a5e7fc3b0       565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8                                                  7 minutes ago       Running             kube-proxy                0                   f5370a56de2f8       kube-proxy-nv5jt
	beac61d1ace3e       095f37015706de6eedb4f57eb2f9a25a1e3bf4bec63d50ba73f8968ef4094fd1                                                  8 minutes ago       Running             kube-scheduler            0                   f51e7e7a68e9c       kube-scheduler-ingress-addon-legacy-324559
	46e6d02e3c574       68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1                                                  8 minutes ago       Running             kube-controller-manager   0                   89cbf22e5fbc6       kube-controller-manager-ingress-addon-legacy-324559
	575c4b5034ded       2694cf044d66591c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0                                                  8 minutes ago       Running             kube-apiserver            0                   c996e9805171b       kube-apiserver-ingress-addon-legacy-324559
	e9b1d6041f823       ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952                                                  8 minutes ago       Running             etcd                      0                   a8e498d029d1c       etcd-ingress-addon-legacy-324559
	
	
	==> coredns [02f33002d9479a64055fccd43ef1ca7ab676214fbd5ccf695f09d9e759813c6e] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 45700869df5177c7f3d9f7a279928a55
	CoreDNS-1.6.7
	linux/arm64, go1.13.6, da7f65b
	[INFO] 127.0.0.1:55624 - 34283 "HINFO IN 5942584428753798869.8126865998415205935. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.084125034s
	
	
	==> describe nodes <==
	Name:               ingress-addon-legacy-324559
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-324559
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393f165ced08f66e4386491f243850f87982a22b
	                    minikube.k8s.io/name=ingress-addon-legacy-324559
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_26T22_07_52_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 26 Dec 2023 22:07:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-324559
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 26 Dec 2023 22:15:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 26 Dec 2023 22:13:25 +0000   Tue, 26 Dec 2023 22:07:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 26 Dec 2023 22:13:25 +0000   Tue, 26 Dec 2023 22:07:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 26 Dec 2023 22:13:25 +0000   Tue, 26 Dec 2023 22:07:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 26 Dec 2023 22:13:25 +0000   Tue, 26 Dec 2023 22:08:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-324559
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 874b2f4d11c64fcb850c2458c0352d0d
	  System UUID:                ab7a2e48-7e2b-4a44-bb00-57f5bc9b375d
	  Boot ID:                    f8f887b2-8c20-433d-a967-90e814370f09
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  ingress-nginx               ingress-nginx-admission-create-b8xk7                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m35s
	  ingress-nginx               ingress-nginx-admission-patch-h7nr5                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m35s
	  ingress-nginx               ingress-nginx-controller-7fcf777cb7-hlm6t              100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         7m35s
	  kube-system                 coredns-66bff467f8-lsmfr                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     7m50s
	  kube-system                 etcd-ingress-addon-legacy-324559                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m2s
	  kube-system                 kindnet-xp2bf                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      7m51s
	  kube-system                 kube-apiserver-ingress-addon-legacy-324559             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m2s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-324559    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m2s
	  kube-system                 kube-ingress-dns-minikube                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-proxy-nv5jt                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m51s
	  kube-system                 kube-scheduler-ingress-addon-legacy-324559             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m2s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             210Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m16s (x5 over 8m17s)  kubelet     Node ingress-addon-legacy-324559 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m16s (x4 over 8m17s)  kubelet     Node ingress-addon-legacy-324559 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m16s (x4 over 8m17s)  kubelet     Node ingress-addon-legacy-324559 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m2s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m2s                   kubelet     Node ingress-addon-legacy-324559 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m2s                   kubelet     Node ingress-addon-legacy-324559 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m2s                   kubelet     Node ingress-addon-legacy-324559 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m49s                  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                7m42s                  kubelet     Node ingress-addon-legacy-324559 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.001236] FS-Cache: O-key=[8] '14613b0000000000'
	[  +0.000818] FS-Cache: N-cookie c=00000042 [p=00000039 fl=2 nc=0 na=1]
	[  +0.001056] FS-Cache: N-cookie d=00000000b9607d6a{9p.inode} n=00000000146f85f7
	[  +0.001167] FS-Cache: N-key=[8] '14613b0000000000'
	[  +0.003514] FS-Cache: Duplicate cookie detected
	[  +0.000807] FS-Cache: O-cookie c=0000003c [p=00000039 fl=226 nc=0 na=1]
	[  +0.001079] FS-Cache: O-cookie d=00000000b9607d6a{9p.inode} n=0000000079163410
	[  +0.001150] FS-Cache: O-key=[8] '14613b0000000000'
	[  +0.000783] FS-Cache: N-cookie c=00000043 [p=00000039 fl=2 nc=0 na=1]
	[  +0.001037] FS-Cache: N-cookie d=00000000b9607d6a{9p.inode} n=00000000f8dfdcd3
	[  +0.001195] FS-Cache: N-key=[8] '14613b0000000000'
	[  +2.993685] FS-Cache: Duplicate cookie detected
	[  +0.000876] FS-Cache: O-cookie c=0000003a [p=00000039 fl=226 nc=0 na=1]
	[  +0.001217] FS-Cache: O-cookie d=00000000b9607d6a{9p.inode} n=00000000ec3309d9
	[  +0.001222] FS-Cache: O-key=[8] '13613b0000000000'
	[  +0.000879] FS-Cache: N-cookie c=00000045 [p=00000039 fl=2 nc=0 na=1]
	[  +0.001151] FS-Cache: N-cookie d=00000000b9607d6a{9p.inode} n=00000000cf0f6968
	[  +0.001228] FS-Cache: N-key=[8] '13613b0000000000'
	[  +0.372532] FS-Cache: Duplicate cookie detected
	[  +0.000898] FS-Cache: O-cookie c=0000003f [p=00000039 fl=226 nc=0 na=1]
	[  +0.001163] FS-Cache: O-cookie d=00000000b9607d6a{9p.inode} n=0000000068094209
	[  +0.001226] FS-Cache: O-key=[8] '19613b0000000000'
	[  +0.000831] FS-Cache: N-cookie c=00000046 [p=00000039 fl=2 nc=0 na=1]
	[  +0.001168] FS-Cache: N-cookie d=00000000b9607d6a{9p.inode} n=0000000030afdcd3
	[  +0.001211] FS-Cache: N-key=[8] '19613b0000000000'
	
	
	==> etcd [e9b1d6041f823f638d3ff0bcb0d2fd195521e835aa1beea773b245095f9bb10a] <==
	raft2023/12/26 22:07:42 INFO: aec36adc501070cc became follower at term 0
	raft2023/12/26 22:07:42 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/12/26 22:07:42 INFO: aec36adc501070cc became follower at term 1
	raft2023/12/26 22:07:42 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-12-26 22:07:42.035422 W | auth: simple token is not cryptographically signed
	2023-12-26 22:07:42.040608 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-12-26 22:07:42.042967 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-12-26 22:07:42.043235 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-12-26 22:07:42.043464 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-12-26 22:07:42.044426 I | embed: listening for peers on 192.168.49.2:2380
	raft2023/12/26 22:07:42 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-12-26 22:07:42.045561 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	raft2023/12/26 22:07:42 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/12/26 22:07:42 INFO: aec36adc501070cc became candidate at term 2
	raft2023/12/26 22:07:42 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/12/26 22:07:42 INFO: aec36adc501070cc became leader at term 2
	raft2023/12/26 22:07:42 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-12-26 22:07:42.229233 I | etcdserver: published {Name:ingress-addon-legacy-324559 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-12-26 22:07:42.229508 I | etcdserver: setting up the initial cluster version to 3.4
	2023-12-26 22:07:42.230271 I | embed: ready to serve client requests
	2023-12-26 22:07:42.231052 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-12-26 22:07:42.231238 I | etcdserver/api: enabled capabilities for version 3.4
	2023-12-26 22:07:42.231315 I | embed: ready to serve client requests
	2023-12-26 22:07:42.234416 I | embed: serving client requests on 127.0.0.1:2379
	2023-12-26 22:07:42.287658 I | embed: serving client requests on 192.168.49.2:2379
	
	
	==> kernel <==
	 22:15:57 up  5:58,  0 users,  load average: 0.19, 0.41, 0.81
	Linux ingress-addon-legacy-324559 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [dce0f84a819509c98f957f9b06142244dd890242592ef8778d73ac98742e2356] <==
	I1226 22:13:50.239678       1 main.go:227] handling current node
	I1226 22:14:00.249029       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 22:14:00.249383       1 main.go:227] handling current node
	I1226 22:14:10.253023       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 22:14:10.253056       1 main.go:227] handling current node
	I1226 22:14:20.262996       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 22:14:20.263025       1 main.go:227] handling current node
	I1226 22:14:30.271607       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 22:14:30.271646       1 main.go:227] handling current node
	I1226 22:14:40.275357       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 22:14:40.275389       1 main.go:227] handling current node
	I1226 22:14:50.284924       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 22:14:50.284951       1 main.go:227] handling current node
	I1226 22:15:00.314247       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 22:15:00.314382       1 main.go:227] handling current node
	I1226 22:15:10.317914       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 22:15:10.317944       1 main.go:227] handling current node
	I1226 22:15:20.325671       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 22:15:20.325703       1 main.go:227] handling current node
	I1226 22:15:30.335531       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 22:15:30.335559       1 main.go:227] handling current node
	I1226 22:15:40.339607       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 22:15:40.339637       1 main.go:227] handling current node
	I1226 22:15:50.342930       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1226 22:15:50.342960       1 main.go:227] handling current node
	
	
	==> kube-apiserver [575c4b5034ded1ed2f54ae4bccbe637a9d78408e528f471d7105f50193c84be5] <==
	I1226 22:07:48.783238       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	I1226 22:07:48.783288       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	E1226 22:07:48.808814       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I1226 22:07:48.904272       1 cache.go:39] Caches are synced for autoregister controller
	I1226 22:07:48.904420       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1226 22:07:48.904488       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1226 22:07:48.904549       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1226 22:07:48.904602       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1226 22:07:49.691753       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1226 22:07:49.691780       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1226 22:07:49.697601       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1226 22:07:49.702169       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1226 22:07:49.702194       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1226 22:07:50.119029       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1226 22:07:50.174696       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1226 22:07:50.297300       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1226 22:07:50.298253       1 controller.go:609] quota admission added evaluator for: endpoints
	I1226 22:07:50.303645       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1226 22:07:51.162779       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1226 22:07:51.817954       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1226 22:07:51.905171       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1226 22:07:55.220193       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1226 22:08:06.541582       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1226 22:08:07.227594       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1226 22:08:22.862751       1 controller.go:609] quota admission added evaluator for: jobs.batch
	
	
	==> kube-controller-manager [46e6d02e3c5745545bfd24ad3504b526b69d1d83ce3073bf30c82b94071ba620] <==
	E1226 22:08:06.748324       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	I1226 22:08:06.765593       1 shared_informer.go:230] Caches are synced for attach detach 
	E1226 22:08:06.776295       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	I1226 22:08:06.959177       1 shared_informer.go:230] Caches are synced for bootstrap_signer 
	I1226 22:08:07.089344       1 shared_informer.go:230] Caches are synced for endpoint_slice 
	I1226 22:08:07.125121       1 shared_informer.go:230] Caches are synced for resource quota 
	I1226 22:08:07.162551       1 shared_informer.go:230] Caches are synced for disruption 
	I1226 22:08:07.162579       1 disruption.go:339] Sending events to api server.
	I1226 22:08:07.162891       1 shared_informer.go:230] Caches are synced for endpoint 
	I1226 22:08:07.189243       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1226 22:08:07.189267       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1226 22:08:07.210763       1 shared_informer.go:230] Caches are synced for resource quota 
	I1226 22:08:07.211854       1 shared_informer.go:230] Caches are synced for deployment 
	I1226 22:08:07.217267       1 shared_informer.go:230] Caches are synced for ReplicaSet 
	I1226 22:08:07.222356       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1226 22:08:07.263900       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"3d8984af-e9bb-4c8d-8948-0243aba9c518", APIVersion:"apps/v1", ResourceVersion:"200", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 2
	I1226 22:08:07.388063       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"6b755a6f-abd1-454c-bf87-f73bb4df476a", APIVersion:"apps/v1", ResourceVersion:"353", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-dzd9l
	I1226 22:08:07.429603       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"6b755a6f-abd1-454c-bf87-f73bb4df476a", APIVersion:"apps/v1", ResourceVersion:"353", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-lsmfr
	I1226 22:08:07.457881       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"3d8984af-e9bb-4c8d-8948-0243aba9c518", APIVersion:"apps/v1", ResourceVersion:"352", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I1226 22:08:07.993623       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"6b755a6f-abd1-454c-bf87-f73bb4df476a", APIVersion:"apps/v1", ResourceVersion:"363", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-dzd9l
	I1226 22:08:16.566447       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I1226 22:08:22.837420       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"957360b2-8fd2-4ffd-8700-fd88a36908c0", APIVersion:"apps/v1", ResourceVersion:"469", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1226 22:08:22.846464       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"01577136-9062-440e-88c5-4ad3b321118d", APIVersion:"apps/v1", ResourceVersion:"470", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-hlm6t
	I1226 22:08:22.889893       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"704b55dc-5d95-4810-a089-9c17048692f5", APIVersion:"batch/v1", ResourceVersion:"479", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-b8xk7
	I1226 22:08:22.921853       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"9c82089e-e2ea-4ae1-b979-7023f696cf94", APIVersion:"batch/v1", ResourceVersion:"486", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-h7nr5
	
	
	==> kube-proxy [36c3a5e7fc3b0b3e62d89ddc70be43b8929f62a2440886ceded856e6e6596020] <==
	W1226 22:08:08.545246       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1226 22:08:08.588827       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I1226 22:08:08.588928       1 server_others.go:186] Using iptables Proxier.
	I1226 22:08:08.589270       1 server.go:583] Version: v1.18.20
	I1226 22:08:08.590264       1 config.go:315] Starting service config controller
	I1226 22:08:08.590359       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1226 22:08:08.590505       1 config.go:133] Starting endpoints config controller
	I1226 22:08:08.590573       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1226 22:08:08.704609       1 shared_informer.go:230] Caches are synced for endpoints config 
	I1226 22:08:08.704726       1 shared_informer.go:230] Caches are synced for service config 
	
	
	==> kube-scheduler [beac61d1ace3e9fcddf8defb7ffd81bc410cf8d57adc9293474065e9908c9ed9] <==
	I1226 22:07:48.901897       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1226 22:07:48.904202       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1226 22:07:48.904371       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1226 22:07:48.907823       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1226 22:07:48.907956       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1226 22:07:48.914836       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1226 22:07:48.914889       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1226 22:07:48.914961       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1226 22:07:48.915023       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1226 22:07:48.915093       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1226 22:07:48.915210       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1226 22:07:48.915657       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1226 22:07:48.915730       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1226 22:07:48.915794       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1226 22:07:48.915863       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1226 22:07:48.915922       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1226 22:07:48.919002       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1226 22:07:49.730841       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1226 22:07:49.772305       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1226 22:07:49.835906       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1226 22:07:49.923621       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I1226 22:07:51.706906       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E1226 22:08:07.440166       1 factory.go:503] pod: kube-system/coredns-66bff467f8-dzd9l is already present in unschedulable queue
	E1226 22:08:07.728987       1 factory.go:503] pod: kube-system/coredns-66bff467f8-lsmfr is already present in unschedulable queue
	E1226 22:08:08.428739       1 factory.go:503] pod: kube-system/storage-provisioner is already present in the active queue
	
	
	==> kubelet <==
	Dec 26 22:15:00 ingress-addon-legacy-324559 kubelet[1610]: E1226 22:15:00.249440    1610 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 26 22:15:00 ingress-addon-legacy-324559 kubelet[1610]: E1226 22:15:00.249482    1610 pod_workers.go:191] Error syncing pod 96c39005-b696-4b74-a209-d6437a252912 ("kube-ingress-dns-minikube_kube-system(96c39005-b696-4b74-a209-d6437a252912)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Dec 26 22:15:08 ingress-addon-legacy-324559 kubelet[1610]: E1226 22:15:08.248928    1610 pod_workers.go:191] Error syncing pod 37921367-230b-4ba2-b651-69e165130c2f ("ingress-nginx-admission-patch-h7nr5_ingress-nginx(37921367-230b-4ba2-b651-69e165130c2f)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Dec 26 22:15:11 ingress-addon-legacy-324559 kubelet[1610]: E1226 22:15:11.248917    1610 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 26 22:15:11 ingress-addon-legacy-324559 kubelet[1610]: E1226 22:15:11.248962    1610 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 26 22:15:11 ingress-addon-legacy-324559 kubelet[1610]: E1226 22:15:11.249012    1610 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 26 22:15:11 ingress-addon-legacy-324559 kubelet[1610]: E1226 22:15:11.249046    1610 pod_workers.go:191] Error syncing pod 96c39005-b696-4b74-a209-d6437a252912 ("kube-ingress-dns-minikube_kube-system(96c39005-b696-4b74-a209-d6437a252912)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Dec 26 22:15:20 ingress-addon-legacy-324559 kubelet[1610]: E1226 22:15:20.249028    1610 pod_workers.go:191] Error syncing pod 37921367-230b-4ba2-b651-69e165130c2f ("ingress-nginx-admission-patch-h7nr5_ingress-nginx(37921367-230b-4ba2-b651-69e165130c2f)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Dec 26 22:15:23 ingress-addon-legacy-324559 kubelet[1610]: E1226 22:15:23.248971    1610 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 26 22:15:23 ingress-addon-legacy-324559 kubelet[1610]: E1226 22:15:23.249012    1610 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 26 22:15:23 ingress-addon-legacy-324559 kubelet[1610]: E1226 22:15:23.249052    1610 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 26 22:15:23 ingress-addon-legacy-324559 kubelet[1610]: E1226 22:15:23.249085    1610 pod_workers.go:191] Error syncing pod 96c39005-b696-4b74-a209-d6437a252912 ("kube-ingress-dns-minikube_kube-system(96c39005-b696-4b74-a209-d6437a252912)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Dec 26 22:15:36 ingress-addon-legacy-324559 kubelet[1610]: E1226 22:15:36.248918    1610 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 26 22:15:36 ingress-addon-legacy-324559 kubelet[1610]: E1226 22:15:36.248972    1610 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 26 22:15:36 ingress-addon-legacy-324559 kubelet[1610]: E1226 22:15:36.249036    1610 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 26 22:15:36 ingress-addon-legacy-324559 kubelet[1610]: E1226 22:15:36.249086    1610 pod_workers.go:191] Error syncing pod 96c39005-b696-4b74-a209-d6437a252912 ("kube-ingress-dns-minikube_kube-system(96c39005-b696-4b74-a209-d6437a252912)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Dec 26 22:15:38 ingress-addon-legacy-324559 kubelet[1610]: E1226 22:15:38.735244    1610 remote_image.go:113] PullImage "docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" from image service failed: rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:d402db4f47a0e1007e8feb5e57d93c44f6c98ebf489ca77bacb91f8eefd2419b in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Dec 26 22:15:38 ingress-addon-legacy-324559 kubelet[1610]: E1226 22:15:38.735312    1610 kuberuntime_image.go:50] Pull image "docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" failed: rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:d402db4f47a0e1007e8feb5e57d93c44f6c98ebf489ca77bacb91f8eefd2419b in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Dec 26 22:15:38 ingress-addon-legacy-324559 kubelet[1610]: E1226 22:15:38.735480    1610 kuberuntime_manager.go:818] container start failed: ErrImagePull: rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:d402db4f47a0e1007e8feb5e57d93c44f6c98ebf489ca77bacb91f8eefd2419b in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Dec 26 22:15:38 ingress-addon-legacy-324559 kubelet[1610]: E1226 22:15:38.735515    1610 pod_workers.go:191] Error syncing pod a6cf9ebb-384e-437d-8920-aec4d8b9acd0 ("ingress-nginx-admission-create-b8xk7_ingress-nginx(a6cf9ebb-384e-437d-8920-aec4d8b9acd0)"), skipping: failed to "StartContainer" for "create" with ErrImagePull: "rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:d402db4f47a0e1007e8feb5e57d93c44f6c98ebf489ca77bacb91f8eefd2419b in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Dec 26 22:15:49 ingress-addon-legacy-324559 kubelet[1610]: E1226 22:15:49.248926    1610 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 26 22:15:49 ingress-addon-legacy-324559 kubelet[1610]: E1226 22:15:49.248982    1610 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 26 22:15:49 ingress-addon-legacy-324559 kubelet[1610]: E1226 22:15:49.249029    1610 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 26 22:15:49 ingress-addon-legacy-324559 kubelet[1610]: E1226 22:15:49.249063    1610 pod_workers.go:191] Error syncing pod 96c39005-b696-4b74-a209-d6437a252912 ("kube-ingress-dns-minikube_kube-system(96c39005-b696-4b74-a209-d6437a252912)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Dec 26 22:15:50 ingress-addon-legacy-324559 kubelet[1610]: E1226 22:15:50.249545    1610 pod_workers.go:191] Error syncing pod a6cf9ebb-384e-437d-8920-aec4d8b9acd0 ("ingress-nginx-admission-create-b8xk7_ingress-nginx(a6cf9ebb-384e-437d-8920-aec4d8b9acd0)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	
	
	==> storage-provisioner [0aea528753119bda3cba01c417a6b9f286728e611aeb989175d4f1a81b799666] <==
	I1226 22:08:22.716387       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1226 22:08:22.740244       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1226 22:08:22.740561       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1226 22:08:22.750546       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1226 22:08:22.750725       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-324559_6f1b9ef8-eaf0-4794-970e-5e7767c735b3!
	I1226 22:08:22.751293       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fb789ba4-9e0f-40c3-82a2-46b1717003f4", APIVersion:"v1", ResourceVersion:"456", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-324559_6f1b9ef8-eaf0-4794-970e-5e7767c735b3 became leader
	I1226 22:08:22.852250       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-324559_6f1b9ef8-eaf0-4794-970e-5e7767c735b3!
	

                                                
                                                
-- /stdout --
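The kubelet log above records two distinct pull failures: the kube-ingress-dns image is referenced by a short name ("cryptexlabs/minikube-ingress-dns:0.3.0"), which CRI-O refuses to resolve because no unqualified-search registries are defined in /etc/containers/registries.conf, while the kube-webhook-certgen pull does reach docker.io but is rejected with toomanyrequests (the anonymous Docker Hub pull rate limit). A minimal sketch of a fix for the short-name error, assuming the node's registries.conf is writable and crio runs under systemd:

	# Sketch only, run on the minikube node (e.g. via minikube ssh):
	# let CRI-O resolve short image names against Docker Hub, then restart crio.
	echo 'unqualified-search-registries = ["docker.io"]' | sudo tee -a /etc/containers/registries.conf
	sudo systemctl restart crio
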
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-324559 -n ingress-addon-legacy-324559
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-324559 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-b8xk7 ingress-nginx-admission-patch-h7nr5 ingress-nginx-controller-7fcf777cb7-hlm6t kube-ingress-dns-minikube
helpers_test.go:274: ======> post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ingress-addon-legacy-324559 describe pod ingress-nginx-admission-create-b8xk7 ingress-nginx-admission-patch-h7nr5 ingress-nginx-controller-7fcf777cb7-hlm6t kube-ingress-dns-minikube
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context ingress-addon-legacy-324559 describe pod ingress-nginx-admission-create-b8xk7 ingress-nginx-admission-patch-h7nr5 ingress-nginx-controller-7fcf777cb7-hlm6t kube-ingress-dns-minikube: exit status 1 (87.793231ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-b8xk7" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-h7nr5" not found
	Error from server (NotFound): pods "ingress-nginx-controller-7fcf777cb7-hlm6t" not found
	Error from server (NotFound): pods "kube-ingress-dns-minikube" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context ingress-addon-legacy-324559 describe pod ingress-nginx-admission-create-b8xk7 ingress-nginx-admission-patch-h7nr5 ingress-nginx-controller-7fcf777cb7-hlm6t kube-ingress-dns-minikube: exit status 1
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (92.49s)
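The FAIL itself traces back to Docker Hub's anonymous pull rate limit rather than a regression in the addon. One mitigation, assuming the image load subcommand is available in this minikube build, is to pre-load the webhook image into the node so the admission jobs never pull from the registry:

	# Sketch: pre-load the image to sidestep anonymous Docker Hub pulls
	out/minikube-linux-arm64 -p ingress-addon-legacy-324559 image load docker.io/jettech/kube-webhook-certgen:v1.5.1

Authenticated pulls (docker login on the node) would raise the rate limit as well.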

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (4.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-772557 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-772557 -- exec busybox-5bc68d56bd-ls5rz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-772557 -- exec busybox-5bc68d56bd-ls5rz -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-772557 -- exec busybox-5bc68d56bd-ls5rz -- sh -c "ping -c 1 192.168.58.1": exit status 1 (229.077744ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-ls5rz): exit status 1
multinode_test.go:588: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-772557 -- exec busybox-5bc68d56bd-sffk7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-772557 -- exec busybox-5bc68d56bd-sffk7 -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-772557 -- exec busybox-5bc68d56bd-sffk7 -- sh -c "ping -c 1 192.168.58.1": exit status 1 (225.040315ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-sffk7): exit status 1
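Both pings fail with "permission denied (are you root?)" inside the busybox container: ping opens a raw ICMP socket, which requires CAP_NET_RAW (or a net.ipv4.ping_group_range sysctl covering the container's group), and the test pod evidently runs without it. A hedged patch sketch that would grant the capability; the container name ("busybox") is assumed for illustration, not taken from the test output:

	# Sketch: grant CAP_NET_RAW so unprivileged ping works (container name assumed)
	kubectl --context multinode-772557 patch deployment busybox --patch '
	spec:
	  template:
	    spec:
	      containers:
	      - name: busybox
	        securityContext:
	          capabilities:
	            add: ["NET_RAW"]'
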
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-772557
helpers_test.go:235: (dbg) docker inspect multinode-772557:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ed1900d23c88a4acb8feaeb89fccc502b26fd99f3f09b7aaef22ccd1d6bfc430",
	        "Created": "2023-12-26T22:22:27.358761063Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 766512,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-26T22:22:27.716968804Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3bfff26a1ae256fcdf8f10a333efdefbe26edc5c1669e1cc5c973c016e44d3c4",
	        "ResolvConfPath": "/var/lib/docker/containers/ed1900d23c88a4acb8feaeb89fccc502b26fd99f3f09b7aaef22ccd1d6bfc430/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ed1900d23c88a4acb8feaeb89fccc502b26fd99f3f09b7aaef22ccd1d6bfc430/hostname",
	        "HostsPath": "/var/lib/docker/containers/ed1900d23c88a4acb8feaeb89fccc502b26fd99f3f09b7aaef22ccd1d6bfc430/hosts",
	        "LogPath": "/var/lib/docker/containers/ed1900d23c88a4acb8feaeb89fccc502b26fd99f3f09b7aaef22ccd1d6bfc430/ed1900d23c88a4acb8feaeb89fccc502b26fd99f3f09b7aaef22ccd1d6bfc430-json.log",
	        "Name": "/multinode-772557",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-772557:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-772557",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/17e557bae9e9a57bbcb838b48093620d988c62c8ad28c391874ef4304fed9a90-init/diff:/var/lib/docker/overlay2/45396a29879cab7c8a67d68e40c59b67c1c0ba964e9ed87a152af8cc5862c477/diff",
	                "MergedDir": "/var/lib/docker/overlay2/17e557bae9e9a57bbcb838b48093620d988c62c8ad28c391874ef4304fed9a90/merged",
	                "UpperDir": "/var/lib/docker/overlay2/17e557bae9e9a57bbcb838b48093620d988c62c8ad28c391874ef4304fed9a90/diff",
	                "WorkDir": "/var/lib/docker/overlay2/17e557bae9e9a57bbcb838b48093620d988c62c8ad28c391874ef4304fed9a90/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-772557",
	                "Source": "/var/lib/docker/volumes/multinode-772557/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-772557",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-772557",
	                "name.minikube.sigs.k8s.io": "multinode-772557",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6100d3784d463287fffe97509c23b7d5d39c3cdbaed06707fe4a200a02d799ec",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33746"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33745"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33742"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33744"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33743"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/6100d3784d46",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-772557": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "ed1900d23c88",
	                        "multinode-772557"
	                    ],
	                    "NetworkID": "cb22699b10d70265364292fa6c3a53cc4067be5b1e8f6df428040b94178e8bde",
	                    "EndpointID": "b8a71732d1b05f224b45963788b9ffb6c42e7d08d3d322f92cbfe41a6a30f4f3",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
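
As an aside on how a dump like the one above is produced and consumed: the harness shells out to docker container inspect, and minikube itself repeatedly queries single fields with Go templates (see the cli_runner lines later in this log). Below is a minimal Go sketch of that pattern; hostPort is a hypothetical helper, assuming only the docker CLI and the container name from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPort asks the docker CLI for the host port bound to a container
// port, using the same template shape seen in the cli_runner log lines.
func hostPort(container, port string) (string, error) {
	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", fmt.Errorf("docker inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	p, err := hostPort("multinode-772557", "22/tcp")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh is published on 127.0.0.1:" + p) // e.g. 33746 in the dump above
}
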
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p multinode-772557 -n multinode-772557
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p multinode-772557 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p multinode-772557 logs -n 25: (1.721898528s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-497716                           | mount-start-2-497716 | jenkins | v1.32.0 | 26 Dec 23 22:22 UTC | 26 Dec 23 22:22 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-497716 ssh -- ls                    | mount-start-2-497716 | jenkins | v1.32.0 | 26 Dec 23 22:22 UTC | 26 Dec 23 22:22 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-495493                           | mount-start-1-495493 | jenkins | v1.32.0 | 26 Dec 23 22:22 UTC | 26 Dec 23 22:22 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-497716 ssh -- ls                    | mount-start-2-497716 | jenkins | v1.32.0 | 26 Dec 23 22:22 UTC | 26 Dec 23 22:22 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-497716                           | mount-start-2-497716 | jenkins | v1.32.0 | 26 Dec 23 22:22 UTC | 26 Dec 23 22:22 UTC |
	| start   | -p mount-start-2-497716                           | mount-start-2-497716 | jenkins | v1.32.0 | 26 Dec 23 22:22 UTC | 26 Dec 23 22:22 UTC |
	| ssh     | mount-start-2-497716 ssh -- ls                    | mount-start-2-497716 | jenkins | v1.32.0 | 26 Dec 23 22:22 UTC | 26 Dec 23 22:22 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-497716                           | mount-start-2-497716 | jenkins | v1.32.0 | 26 Dec 23 22:22 UTC | 26 Dec 23 22:22 UTC |
	| delete  | -p mount-start-1-495493                           | mount-start-1-495493 | jenkins | v1.32.0 | 26 Dec 23 22:22 UTC | 26 Dec 23 22:22 UTC |
	| start   | -p multinode-772557                               | multinode-772557     | jenkins | v1.32.0 | 26 Dec 23 22:22 UTC | 26 Dec 23 22:24 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-772557 -- apply -f                   | multinode-772557     | jenkins | v1.32.0 | 26 Dec 23 22:24 UTC | 26 Dec 23 22:24 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-772557 -- rollout                    | multinode-772557     | jenkins | v1.32.0 | 26 Dec 23 22:24 UTC | 26 Dec 23 22:24 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-772557 -- get pods -o                | multinode-772557     | jenkins | v1.32.0 | 26 Dec 23 22:24 UTC | 26 Dec 23 22:24 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-772557 -- get pods -o                | multinode-772557     | jenkins | v1.32.0 | 26 Dec 23 22:24 UTC | 26 Dec 23 22:24 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-772557 -- exec                       | multinode-772557     | jenkins | v1.32.0 | 26 Dec 23 22:24 UTC | 26 Dec 23 22:24 UTC |
	|         | busybox-5bc68d56bd-ls5rz --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-772557 -- exec                       | multinode-772557     | jenkins | v1.32.0 | 26 Dec 23 22:24 UTC | 26 Dec 23 22:24 UTC |
	|         | busybox-5bc68d56bd-sffk7 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-772557 -- exec                       | multinode-772557     | jenkins | v1.32.0 | 26 Dec 23 22:24 UTC | 26 Dec 23 22:24 UTC |
	|         | busybox-5bc68d56bd-ls5rz --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-772557 -- exec                       | multinode-772557     | jenkins | v1.32.0 | 26 Dec 23 22:24 UTC | 26 Dec 23 22:24 UTC |
	|         | busybox-5bc68d56bd-sffk7 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-772557 -- exec                       | multinode-772557     | jenkins | v1.32.0 | 26 Dec 23 22:24 UTC | 26 Dec 23 22:24 UTC |
	|         | busybox-5bc68d56bd-ls5rz -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-772557 -- exec                       | multinode-772557     | jenkins | v1.32.0 | 26 Dec 23 22:24 UTC | 26 Dec 23 22:24 UTC |
	|         | busybox-5bc68d56bd-sffk7 -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-772557 -- get pods -o                | multinode-772557     | jenkins | v1.32.0 | 26 Dec 23 22:24 UTC | 26 Dec 23 22:24 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-772557 -- exec                       | multinode-772557     | jenkins | v1.32.0 | 26 Dec 23 22:24 UTC | 26 Dec 23 22:24 UTC |
	|         | busybox-5bc68d56bd-ls5rz                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-772557 -- exec                       | multinode-772557     | jenkins | v1.32.0 | 26 Dec 23 22:24 UTC |                     |
	|         | busybox-5bc68d56bd-ls5rz -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-772557 -- exec                       | multinode-772557     | jenkins | v1.32.0 | 26 Dec 23 22:24 UTC | 26 Dec 23 22:24 UTC |
	|         | busybox-5bc68d56bd-sffk7                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-772557 -- exec                       | multinode-772557     | jenkins | v1.32.0 | 26 Dec 23 22:24 UTC |                     |
	|         | busybox-5bc68d56bd-sffk7 -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
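
The two audit rows with an empty End Time are the failing step of TestMultiNode/serial/PingHostFrom2Pods: each busybox pod first resolves host.minikube.internal (the nslookup | awk | cut pipeline in the preceding rows), then pings the gateway 192.168.58.1, and the ping never completes. A rough Go equivalent of the failing check, assuming kubectl on PATH with a context named after the profile; pod names are taken from the table:

package main

import (
	"fmt"
	"os/exec"
)

// pingHostFromPod mirrors the failing audit entries: exec into a pod and
// ping the docker network gateway once.
func pingHostFromPod(pod, gateway string) error {
	cmd := exec.Command("kubectl", "--context", "multinode-772557",
		"exec", pod, "--", "sh", "-c", "ping -c 1 "+gateway)
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s\n", out)
	return err // non-nil when the ping (and hence the test step) fails
}

func main() {
	for _, pod := range []string{"busybox-5bc68d56bd-ls5rz", "busybox-5bc68d56bd-sffk7"} {
		if err := pingHostFromPod(pod, "192.168.58.1"); err != nil {
			fmt.Println(pod, "cannot reach the host:", err)
		}
	}
}
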
	
	
	==> Last Start <==
	Log file created at: 2023/12/26 22:22:21
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1226 22:22:21.981511  766058 out.go:296] Setting OutFile to fd 1 ...
	I1226 22:22:21.981731  766058 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 22:22:21.981744  766058 out.go:309] Setting ErrFile to fd 2...
	I1226 22:22:21.981754  766058 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 22:22:21.982052  766058 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17857-697646/.minikube/bin
	I1226 22:22:21.982574  766058 out.go:303] Setting JSON to false
	I1226 22:22:21.983507  766058 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":21876,"bootTime":1703607466,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1226 22:22:21.983594  766058 start.go:138] virtualization:  
	I1226 22:22:21.987860  766058 out.go:177] * [multinode-772557] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1226 22:22:21.989505  766058 out.go:177]   - MINIKUBE_LOCATION=17857
	I1226 22:22:21.991178  766058 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1226 22:22:21.989641  766058 notify.go:220] Checking for updates...
	I1226 22:22:21.994822  766058 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17857-697646/kubeconfig
	I1226 22:22:21.996817  766058 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17857-697646/.minikube
	I1226 22:22:21.998987  766058 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1226 22:22:22.005847  766058 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1226 22:22:22.008609  766058 driver.go:392] Setting default libvirt URI to qemu:///system
	I1226 22:22:22.033020  766058 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1226 22:22:22.033142  766058 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 22:22:22.119322  766058 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:35 SystemTime:2023-12-26 22:22:22.109764824 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1226 22:22:22.119427  766058 docker.go:295] overlay module found
	I1226 22:22:22.122798  766058 out.go:177] * Using the docker driver based on user configuration
	I1226 22:22:22.124738  766058 start.go:298] selected driver: docker
	I1226 22:22:22.124756  766058 start.go:902] validating driver "docker" against <nil>
	I1226 22:22:22.124775  766058 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1226 22:22:22.125417  766058 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 22:22:22.197860  766058 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:35 SystemTime:2023-12-26 22:22:22.188776571 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1226 22:22:22.198017  766058 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1226 22:22:22.198256  766058 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1226 22:22:22.200233  766058 out.go:177] * Using Docker driver with root privileges
	I1226 22:22:22.202039  766058 cni.go:84] Creating CNI manager for ""
	I1226 22:22:22.202063  766058 cni.go:136] 0 nodes found, recommending kindnet
	I1226 22:22:22.202075  766058 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1226 22:22:22.202087  766058 start_flags.go:323] config:
	{Name:multinode-772557 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-772557 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 22:22:22.205554  766058 out.go:177] * Starting control plane node multinode-772557 in cluster multinode-772557
	I1226 22:22:22.207487  766058 cache.go:121] Beginning downloading kic base image for docker with crio
	I1226 22:22:22.209211  766058 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I1226 22:22:22.211289  766058 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1226 22:22:22.211346  766058 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17857-697646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I1226 22:22:22.211355  766058 cache.go:56] Caching tarball of preloaded images
	I1226 22:22:22.211373  766058 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I1226 22:22:22.211434  766058 preload.go:174] Found /home/jenkins/minikube-integration/17857-697646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1226 22:22:22.211444  766058 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1226 22:22:22.211788  766058 profile.go:148] Saving config to /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/multinode-772557/config.json ...
	I1226 22:22:22.211808  766058 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/multinode-772557/config.json: {Name:mk45720670172633a4834991c8ac77ed169f9e9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
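
The lock.go line above shows config.json being written under a retrying file lock (Delay:500ms, Timeout:1m0s). minikube uses a mutex library for this; the following is only a generic lockfile sketch of the same idea, and writeFileLocked is a hypothetical helper, not minikube's API:

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// writeFileLocked takes an exclusive lockfile with retries, writes the
// target file, then releases the lock on return.
func writeFileLocked(path string, data []byte) error {
	lock := path + ".lock"
	deadline := time.Now().Add(time.Minute) // Timeout:1m0s
	for {
		f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			defer os.Remove(lock) // release after the write below
			f.Close()
			break
		}
		if !errors.Is(err, os.ErrExist) {
			return err
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out acquiring %s", lock)
		}
		time.Sleep(500 * time.Millisecond) // Delay:500ms
	}
	return os.WriteFile(path, data, 0o600)
}

func main() {
	fmt.Println(writeFileLocked("/tmp/config.json", []byte(`{"Name":"multinode-772557"}`)))
}
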
	I1226 22:22:22.228613  766058 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon, skipping pull
	I1226 22:22:22.228649  766058 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in daemon, skipping load
	I1226 22:22:22.228671  766058 cache.go:194] Successfully downloaded all kic artifacts
	I1226 22:22:22.228732  766058 start.go:365] acquiring machines lock for multinode-772557: {Name:mk17257ee27eab8d8f055167b79222337cdda245 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 22:22:22.228856  766058 start.go:369] acquired machines lock for "multinode-772557" in 102.881µs
	I1226 22:22:22.228887  766058 start.go:93] Provisioning new machine with config: &{Name:multinode-772557 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-772557 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1226 22:22:22.228968  766058 start.go:125] createHost starting for "" (driver="docker")
	I1226 22:22:22.231172  766058 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1226 22:22:22.231416  766058 start.go:159] libmachine.API.Create for "multinode-772557" (driver="docker")
	I1226 22:22:22.231467  766058 client.go:168] LocalClient.Create starting
	I1226 22:22:22.231555  766058 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem
	I1226 22:22:22.231597  766058 main.go:141] libmachine: Decoding PEM data...
	I1226 22:22:22.231616  766058 main.go:141] libmachine: Parsing certificate...
	I1226 22:22:22.231668  766058 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17857-697646/.minikube/certs/cert.pem
	I1226 22:22:22.231694  766058 main.go:141] libmachine: Decoding PEM data...
	I1226 22:22:22.231711  766058 main.go:141] libmachine: Parsing certificate...
	I1226 22:22:22.232079  766058 cli_runner.go:164] Run: docker network inspect multinode-772557 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1226 22:22:22.248885  766058 cli_runner.go:211] docker network inspect multinode-772557 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1226 22:22:22.248981  766058 network_create.go:281] running [docker network inspect multinode-772557] to gather additional debugging logs...
	I1226 22:22:22.249003  766058 cli_runner.go:164] Run: docker network inspect multinode-772557
	W1226 22:22:22.265398  766058 cli_runner.go:211] docker network inspect multinode-772557 returned with exit code 1
	I1226 22:22:22.265427  766058 network_create.go:284] error running [docker network inspect multinode-772557]: docker network inspect multinode-772557: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-772557 not found
	I1226 22:22:22.265440  766058 network_create.go:286] output of [docker network inspect multinode-772557]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-772557 not found
	
	** /stderr **
	I1226 22:22:22.265548  766058 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1226 22:22:22.282908  766058 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d0b2e7e17d50 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:7d:93:49:57} reservation:<nil>}
	I1226 22:22:22.283267  766058 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4002532f20}
	I1226 22:22:22.283290  766058 network_create.go:124] attempt to create docker network multinode-772557 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1226 22:22:22.283352  766058 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-772557 multinode-772557
	I1226 22:22:22.343146  766058 network_create.go:108] docker network multinode-772557 192.168.58.0/24 created
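
The two network.go lines above show the subnet walk: 192.168.49.0/24 is already owned by another minikube bridge, so the next candidate, 192.168.58.0/24, is chosen. A simplified sketch of such a walk, checking candidates against local interface addresses; this is an illustration of the idea, not minikube's actual implementation (which also records a reservation):

package main

import (
	"fmt"
	"net"
)

// taken reports whether any local interface address falls inside cidr,
// which is how a subnet like 192.168.49.0/24 gets skipped above.
func taken(cidr string) bool {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return false
	}
	addrs, _ := net.InterfaceAddrs()
	for _, a := range addrs {
		if ip, _, err := net.ParseCIDR(a.String()); err == nil && ipnet.Contains(ip) {
			return true
		}
	}
	return false
}

func main() {
	for _, cidr := range []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24"} {
		if !taken(cidr) {
			fmt.Println("using free private subnet", cidr)
			return
		}
		fmt.Println("skipping subnet that is taken:", cidr)
	}
}
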
	I1226 22:22:22.343176  766058 kic.go:121] calculated static IP "192.168.58.2" for the "multinode-772557" container
	I1226 22:22:22.343250  766058 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1226 22:22:22.359479  766058 cli_runner.go:164] Run: docker volume create multinode-772557 --label name.minikube.sigs.k8s.io=multinode-772557 --label created_by.minikube.sigs.k8s.io=true
	I1226 22:22:22.377874  766058 oci.go:103] Successfully created a docker volume multinode-772557
	I1226 22:22:22.377968  766058 cli_runner.go:164] Run: docker run --rm --name multinode-772557-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-772557 --entrypoint /usr/bin/test -v multinode-772557:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib
	I1226 22:22:22.936608  766058 oci.go:107] Successfully prepared a docker volume multinode-772557
	I1226 22:22:22.936667  766058 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1226 22:22:22.936687  766058 kic.go:194] Starting extracting preloaded images to volume ...
	I1226 22:22:22.936785  766058 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17857-697646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-772557:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir
	I1226 22:22:27.276763  766058 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17857-697646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-772557:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir: (4.339937229s)
	I1226 22:22:27.276797  766058 kic.go:203] duration metric: took 4.340103 seconds to extract preloaded images to volume
	W1226 22:22:27.276932  766058 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1226 22:22:27.277053  766058 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1226 22:22:27.342552  766058 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-772557 --name multinode-772557 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-772557 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-772557 --network multinode-772557 --ip 192.168.58.2 --volume multinode-772557:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c
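
This docker run invocation is where the HostConfig fields in the inspect dump above originate: --privileged, the seccomp/apparmor options, the tmpfs mounts, the /var volume, the static IP, and the five ephemeral localhost port publishes. A stripped-down sketch of assembling it from Go (labels and --expose omitted; the image digest is elided here):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857" // digest omitted in this sketch
	name := "multinode-772557"
	args := []string{
		"run", "-d", "-t", "--privileged",
		"--security-opt", "seccomp=unconfined",
		"--security-opt", "apparmor=unconfined",
		"--tmpfs", "/tmp", "--tmpfs", "/run",
		"-v", "/lib/modules:/lib/modules:ro",
		"--hostname", name, "--name", name,
		"--network", name, "--ip", "192.168.58.2",
		"--volume", name + ":/var",
		"--memory=2200mb", "--cpus=2",
		"-e", "container=docker",
	}
	// Publish each container port on an ephemeral localhost port, matching
	// the empty HostPort fields in the PortBindings section above.
	for _, p := range []string{"8443", "22", "2376", "5000", "32443"} {
		args = append(args, "--publish=127.0.0.1::"+p)
	}
	args = append(args, image)
	out, err := exec.Command("docker", args...).CombinedOutput()
	fmt.Printf("%s err=%v\n", out, err)
}
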
	I1226 22:22:27.725695  766058 cli_runner.go:164] Run: docker container inspect multinode-772557 --format={{.State.Running}}
	I1226 22:22:27.749688  766058 cli_runner.go:164] Run: docker container inspect multinode-772557 --format={{.State.Status}}
	I1226 22:22:27.771069  766058 cli_runner.go:164] Run: docker exec multinode-772557 stat /var/lib/dpkg/alternatives/iptables
	I1226 22:22:27.839688  766058 oci.go:144] the created container "multinode-772557" has a running status.
	I1226 22:22:27.839719  766058 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17857-697646/.minikube/machines/multinode-772557/id_rsa...
	I1226 22:22:28.066340  766058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/machines/multinode-772557/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1226 22:22:28.066390  766058 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17857-697646/.minikube/machines/multinode-772557/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1226 22:22:28.096942  766058 cli_runner.go:164] Run: docker container inspect multinode-772557 --format={{.State.Status}}
	I1226 22:22:28.126528  766058 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1226 22:22:28.126551  766058 kic_runner.go:114] Args: [docker exec --privileged multinode-772557 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1226 22:22:28.220440  766058 cli_runner.go:164] Run: docker container inspect multinode-772557 --format={{.State.Status}}
	I1226 22:22:28.252201  766058 machine.go:88] provisioning docker machine ...
	I1226 22:22:28.252245  766058 ubuntu.go:169] provisioning hostname "multinode-772557"
	I1226 22:22:28.252328  766058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-772557
	I1226 22:22:28.289174  766058 main.go:141] libmachine: Using SSH client type: native
	I1226 22:22:28.289640  766058 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33746 <nil> <nil>}
	I1226 22:22:28.289661  766058 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-772557 && echo "multinode-772557" | sudo tee /etc/hostname
	I1226 22:22:28.290470  766058 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37024->127.0.0.1:33746: read: connection reset by peer
	I1226 22:22:31.443293  766058 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-772557
	
	I1226 22:22:31.443379  766058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-772557
	I1226 22:22:31.461985  766058 main.go:141] libmachine: Using SSH client type: native
	I1226 22:22:31.462401  766058 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33746 <nil> <nil>}
	I1226 22:22:31.462424  766058 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-772557' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-772557/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-772557' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1226 22:22:31.601989  766058 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1226 22:22:31.602018  766058 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17857-697646/.minikube CaCertPath:/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17857-697646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17857-697646/.minikube}
	I1226 22:22:31.602047  766058 ubuntu.go:177] setting up certificates
	I1226 22:22:31.602062  766058 provision.go:83] configureAuth start
	I1226 22:22:31.602148  766058 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-772557
	I1226 22:22:31.620915  766058 provision.go:138] copyHostCerts
	I1226 22:22:31.620959  766058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17857-697646/.minikube/ca.pem
	I1226 22:22:31.620993  766058 exec_runner.go:144] found /home/jenkins/minikube-integration/17857-697646/.minikube/ca.pem, removing ...
	I1226 22:22:31.621006  766058 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17857-697646/.minikube/ca.pem
	I1226 22:22:31.621086  766058 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17857-697646/.minikube/ca.pem (1082 bytes)
	I1226 22:22:31.621174  766058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17857-697646/.minikube/cert.pem
	I1226 22:22:31.621197  766058 exec_runner.go:144] found /home/jenkins/minikube-integration/17857-697646/.minikube/cert.pem, removing ...
	I1226 22:22:31.621205  766058 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17857-697646/.minikube/cert.pem
	I1226 22:22:31.621236  766058 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17857-697646/.minikube/cert.pem (1123 bytes)
	I1226 22:22:31.621281  766058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17857-697646/.minikube/key.pem
	I1226 22:22:31.621305  766058 exec_runner.go:144] found /home/jenkins/minikube-integration/17857-697646/.minikube/key.pem, removing ...
	I1226 22:22:31.621313  766058 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17857-697646/.minikube/key.pem
	I1226 22:22:31.621340  766058 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17857-697646/.minikube/key.pem (1679 bytes)
	I1226 22:22:31.621393  766058 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17857-697646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca-key.pem org=jenkins.multinode-772557 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-772557]
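
provision.go:112 issues a server certificate whose SANs cover the container IP, loopback, and the machine names listed in san=[...]. A compact standard-library sketch of minting such a cert; it self-signs for brevity where the real flow signs with the minikube CA key:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-772557"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
		// SANs matching the san=[...] list in the log line.
		IPAddresses: []net.IP{net.ParseIP("192.168.58.2"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "multinode-772557"},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed here; minikube signs with its CA cert and key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
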
	I1226 22:22:31.922019  766058 provision.go:172] copyRemoteCerts
	I1226 22:22:31.922091  766058 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1226 22:22:31.922136  766058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-772557
	I1226 22:22:31.940079  766058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33746 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/multinode-772557/id_rsa Username:docker}
	I1226 22:22:32.039299  766058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1226 22:22:32.039362  766058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1226 22:22:32.069055  766058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1226 22:22:32.069167  766058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1226 22:22:32.098118  766058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1226 22:22:32.098181  766058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1226 22:22:32.127549  766058 provision.go:86] duration metric: configureAuth took 525.472547ms
	I1226 22:22:32.127580  766058 ubuntu.go:193] setting minikube options for container-runtime
	I1226 22:22:32.127774  766058 config.go:182] Loaded profile config "multinode-772557": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1226 22:22:32.127879  766058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-772557
	I1226 22:22:32.146074  766058 main.go:141] libmachine: Using SSH client type: native
	I1226 22:22:32.146518  766058 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33746 <nil> <nil>}
	I1226 22:22:32.146539  766058 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1226 22:22:32.396553  766058 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1226 22:22:32.396576  766058 machine.go:91] provisioned docker machine in 4.14434202s
	I1226 22:22:32.396585  766058 client.go:171] LocalClient.Create took 10.165108799s
	I1226 22:22:32.396614  766058 start.go:167] duration metric: libmachine.API.Create for "multinode-772557" took 10.165182323s
	I1226 22:22:32.396632  766058 start.go:300] post-start starting for "multinode-772557" (driver="docker")
	I1226 22:22:32.396642  766058 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1226 22:22:32.396737  766058 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1226 22:22:32.396795  766058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-772557
	I1226 22:22:32.415339  766058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33746 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/multinode-772557/id_rsa Username:docker}
	I1226 22:22:32.515433  766058 ssh_runner.go:195] Run: cat /etc/os-release
	I1226 22:22:32.519481  766058 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1226 22:22:32.519499  766058 command_runner.go:130] > NAME="Ubuntu"
	I1226 22:22:32.519507  766058 command_runner.go:130] > VERSION_ID="22.04"
	I1226 22:22:32.519513  766058 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1226 22:22:32.519519  766058 command_runner.go:130] > VERSION_CODENAME=jammy
	I1226 22:22:32.519524  766058 command_runner.go:130] > ID=ubuntu
	I1226 22:22:32.519529  766058 command_runner.go:130] > ID_LIKE=debian
	I1226 22:22:32.519534  766058 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1226 22:22:32.519542  766058 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1226 22:22:32.519550  766058 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1226 22:22:32.519559  766058 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1226 22:22:32.519564  766058 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1226 22:22:32.519681  766058 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1226 22:22:32.519707  766058 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1226 22:22:32.519717  766058 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1226 22:22:32.519724  766058 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1226 22:22:32.519734  766058 filesync.go:126] Scanning /home/jenkins/minikube-integration/17857-697646/.minikube/addons for local assets ...
	I1226 22:22:32.519789  766058 filesync.go:126] Scanning /home/jenkins/minikube-integration/17857-697646/.minikube/files for local assets ...
	I1226 22:22:32.519876  766058 filesync.go:149] local asset: /home/jenkins/minikube-integration/17857-697646/.minikube/files/etc/ssl/certs/7030362.pem -> 7030362.pem in /etc/ssl/certs
	I1226 22:22:32.519883  766058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/files/etc/ssl/certs/7030362.pem -> /etc/ssl/certs/7030362.pem
	I1226 22:22:32.519981  766058 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1226 22:22:32.529958  766058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/files/etc/ssl/certs/7030362.pem --> /etc/ssl/certs/7030362.pem (1708 bytes)
	I1226 22:22:32.559346  766058 start.go:303] post-start completed in 162.699923ms
	I1226 22:22:32.559702  766058 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-772557
	I1226 22:22:32.577141  766058 profile.go:148] Saving config to /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/multinode-772557/config.json ...
	I1226 22:22:32.577418  766058 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1226 22:22:32.577468  766058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-772557
	I1226 22:22:32.598954  766058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33746 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/multinode-772557/id_rsa Username:docker}
	I1226 22:22:32.694480  766058 command_runner.go:130] > 12%!
	(MISSING)I1226 22:22:32.694619  766058 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1226 22:22:32.700694  766058 command_runner.go:130] > 172G
	I1226 22:22:32.700725  766058 start.go:128] duration metric: createHost completed in 10.471746574s
	I1226 22:22:32.700734  766058 start.go:83] releasing machines lock for "multinode-772557", held for 10.471864364s
	I1226 22:22:32.700820  766058 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-772557
	I1226 22:22:32.718163  766058 ssh_runner.go:195] Run: cat /version.json
	I1226 22:22:32.718219  766058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-772557
	I1226 22:22:32.718435  766058 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1226 22:22:32.718494  766058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-772557
	I1226 22:22:32.737219  766058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33746 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/multinode-772557/id_rsa Username:docker}
	I1226 22:22:32.737451  766058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33746 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/multinode-772557/id_rsa Username:docker}
	I1226 22:22:32.963808  766058 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1226 22:22:32.966914  766058 command_runner.go:130] > {"iso_version": "v1.32.1-1702708929-17806", "kicbase_version": "v0.0.42-1703498848-17857", "minikube_version": "v1.32.0", "commit": "d18dc8d014b22564d2860ddb02a821a21df70433"}
	I1226 22:22:32.967121  766058 ssh_runner.go:195] Run: systemctl --version
	I1226 22:22:32.972185  766058 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.11)
	I1226 22:22:32.972267  766058 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1226 22:22:32.972642  766058 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1226 22:22:33.121866  766058 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1226 22:22:33.127407  766058 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1226 22:22:33.127434  766058 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1226 22:22:33.127442  766058 command_runner.go:130] > Device: 36h/54d	Inode: 1302392     Links: 1
	I1226 22:22:33.127450  766058 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1226 22:22:33.127457  766058 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1226 22:22:33.127463  766058 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1226 22:22:33.127486  766058 command_runner.go:130] > Change: 2023-12-26 21:45:18.403362393 +0000
	I1226 22:22:33.127497  766058 command_runner.go:130] >  Birth: 2023-12-26 21:45:18.403362393 +0000
	I1226 22:22:33.127737  766058 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1226 22:22:33.151929  766058 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1226 22:22:33.152061  766058 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1226 22:22:33.193362  766058 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1226 22:22:33.193406  766058 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1226 22:22:33.193414  766058 start.go:475] detecting cgroup driver to use...
	I1226 22:22:33.193444  766058 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1226 22:22:33.193497  766058 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1226 22:22:33.212477  766058 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1226 22:22:33.226279  766058 docker.go:203] disabling cri-docker service (if available) ...
	I1226 22:22:33.226393  766058 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1226 22:22:33.242564  766058 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1226 22:22:33.259490  766058 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1226 22:22:33.364093  766058 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1226 22:22:33.471283  766058 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1226 22:22:33.471314  766058 docker.go:219] disabling docker service ...
	I1226 22:22:33.471367  766058 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1226 22:22:33.493450  766058 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1226 22:22:33.507114  766058 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1226 22:22:33.615290  766058 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1226 22:22:33.615412  766058 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1226 22:22:33.729339  766058 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1226 22:22:33.729477  766058 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1226 22:22:33.745323  766058 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1226 22:22:33.766472  766058 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1226 22:22:33.768031  766058 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1226 22:22:33.768129  766058 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1226 22:22:33.780670  766058 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1226 22:22:33.780769  766058 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1226 22:22:33.793574  766058 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1226 22:22:33.805493  766058 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1226 22:22:33.818292  766058 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1226 22:22:33.829284  766058 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1226 22:22:33.838354  766058 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1226 22:22:33.839574  766058 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1226 22:22:33.849858  766058 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1226 22:22:33.949142  766058 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1226 22:22:34.060613  766058 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1226 22:22:34.060685  766058 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1226 22:22:34.065984  766058 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1226 22:22:34.066010  766058 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1226 22:22:34.066018  766058 command_runner.go:130] > Device: 43h/67d	Inode: 190         Links: 1
	I1226 22:22:34.066027  766058 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1226 22:22:34.066033  766058 command_runner.go:130] > Access: 2023-12-26 22:22:34.044012700 +0000
	I1226 22:22:34.066042  766058 command_runner.go:130] > Modify: 2023-12-26 22:22:34.044012700 +0000
	I1226 22:22:34.066048  766058 command_runner.go:130] > Change: 2023-12-26 22:22:34.044012700 +0000
	I1226 22:22:34.066054  766058 command_runner.go:130] >  Birth: -
	I1226 22:22:34.066073  766058 start.go:543] Will wait 60s for crictl version
	I1226 22:22:34.066132  766058 ssh_runner.go:195] Run: which crictl
	I1226 22:22:34.071074  766058 command_runner.go:130] > /usr/bin/crictl
	I1226 22:22:34.071363  766058 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1226 22:22:34.112698  766058 command_runner.go:130] > Version:  0.1.0
	I1226 22:22:34.112723  766058 command_runner.go:130] > RuntimeName:  cri-o
	I1226 22:22:34.112730  766058 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1226 22:22:34.112737  766058 command_runner.go:130] > RuntimeApiVersion:  v1
	I1226 22:22:34.115161  766058 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1226 22:22:34.115251  766058 ssh_runner.go:195] Run: crio --version
	I1226 22:22:34.159722  766058 command_runner.go:130] > crio version 1.24.6
	I1226 22:22:34.159790  766058 command_runner.go:130] > Version:          1.24.6
	I1226 22:22:34.159829  766058 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1226 22:22:34.159863  766058 command_runner.go:130] > GitTreeState:     clean
	I1226 22:22:34.159888  766058 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1226 22:22:34.159913  766058 command_runner.go:130] > GoVersion:        go1.18.2
	I1226 22:22:34.159954  766058 command_runner.go:130] > Compiler:         gc
	I1226 22:22:34.159979  766058 command_runner.go:130] > Platform:         linux/arm64
	I1226 22:22:34.160027  766058 command_runner.go:130] > Linkmode:         dynamic
	I1226 22:22:34.160056  766058 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1226 22:22:34.160076  766058 command_runner.go:130] > SeccompEnabled:   true
	I1226 22:22:34.160106  766058 command_runner.go:130] > AppArmorEnabled:  false
	I1226 22:22:34.161255  766058 ssh_runner.go:195] Run: crio --version
	I1226 22:22:34.208760  766058 command_runner.go:130] > crio version 1.24.6
	I1226 22:22:34.208820  766058 command_runner.go:130] > Version:          1.24.6
	I1226 22:22:34.208856  766058 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1226 22:22:34.208881  766058 command_runner.go:130] > GitTreeState:     clean
	I1226 22:22:34.208903  766058 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1226 22:22:34.208938  766058 command_runner.go:130] > GoVersion:        go1.18.2
	I1226 22:22:34.208964  766058 command_runner.go:130] > Compiler:         gc
	I1226 22:22:34.208989  766058 command_runner.go:130] > Platform:         linux/arm64
	I1226 22:22:34.209022  766058 command_runner.go:130] > Linkmode:         dynamic
	I1226 22:22:34.209052  766058 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1226 22:22:34.209075  766058 command_runner.go:130] > SeccompEnabled:   true
	I1226 22:22:34.209106  766058 command_runner.go:130] > AppArmorEnabled:  false
	I1226 22:22:34.213124  766058 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I1226 22:22:34.215106  766058 cli_runner.go:164] Run: docker network inspect multinode-772557 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1226 22:22:34.232435  766058 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1226 22:22:34.236995  766058 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1226 22:22:34.250643  766058 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1226 22:22:34.250722  766058 ssh_runner.go:195] Run: sudo crictl images --output json
	I1226 22:22:34.320111  766058 command_runner.go:130] > {
	I1226 22:22:34.320134  766058 command_runner.go:130] >   "images": [
	I1226 22:22:34.320140  766058 command_runner.go:130] >     {
	I1226 22:22:34.320151  766058 command_runner.go:130] >       "id": "04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26",
	I1226 22:22:34.320159  766058 command_runner.go:130] >       "repoTags": [
	I1226 22:22:34.320167  766058 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1226 22:22:34.320172  766058 command_runner.go:130] >       ],
	I1226 22:22:34.320179  766058 command_runner.go:130] >       "repoDigests": [
	I1226 22:22:34.320194  766058 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1226 22:22:34.320207  766058 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"
	I1226 22:22:34.320212  766058 command_runner.go:130] >       ],
	I1226 22:22:34.320217  766058 command_runner.go:130] >       "size": "60867618",
	I1226 22:22:34.320224  766058 command_runner.go:130] >       "uid": null,
	I1226 22:22:34.320229  766058 command_runner.go:130] >       "username": "",
	I1226 22:22:34.320246  766058 command_runner.go:130] >       "spec": null,
	I1226 22:22:34.320251  766058 command_runner.go:130] >       "pinned": false
	I1226 22:22:34.320258  766058 command_runner.go:130] >     },
	I1226 22:22:34.320263  766058 command_runner.go:130] >     {
	I1226 22:22:34.320271  766058 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1226 22:22:34.320276  766058 command_runner.go:130] >       "repoTags": [
	I1226 22:22:34.320285  766058 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1226 22:22:34.320292  766058 command_runner.go:130] >       ],
	I1226 22:22:34.320303  766058 command_runner.go:130] >       "repoDigests": [
	I1226 22:22:34.320314  766058 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1226 22:22:34.320327  766058 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1226 22:22:34.320333  766058 command_runner.go:130] >       ],
	I1226 22:22:34.320343  766058 command_runner.go:130] >       "size": "29037500",
	I1226 22:22:34.320348  766058 command_runner.go:130] >       "uid": null,
	I1226 22:22:34.320354  766058 command_runner.go:130] >       "username": "",
	I1226 22:22:34.320359  766058 command_runner.go:130] >       "spec": null,
	I1226 22:22:34.320366  766058 command_runner.go:130] >       "pinned": false
	I1226 22:22:34.320374  766058 command_runner.go:130] >     },
	I1226 22:22:34.320378  766058 command_runner.go:130] >     {
	I1226 22:22:34.320388  766058 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I1226 22:22:34.320393  766058 command_runner.go:130] >       "repoTags": [
	I1226 22:22:34.320401  766058 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1226 22:22:34.320408  766058 command_runner.go:130] >       ],
	I1226 22:22:34.320413  766058 command_runner.go:130] >       "repoDigests": [
	I1226 22:22:34.320423  766058 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I1226 22:22:34.320436  766058 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I1226 22:22:34.320441  766058 command_runner.go:130] >       ],
	I1226 22:22:34.320447  766058 command_runner.go:130] >       "size": "51393451",
	I1226 22:22:34.320452  766058 command_runner.go:130] >       "uid": null,
	I1226 22:22:34.320459  766058 command_runner.go:130] >       "username": "",
	I1226 22:22:34.320467  766058 command_runner.go:130] >       "spec": null,
	I1226 22:22:34.320472  766058 command_runner.go:130] >       "pinned": false
	I1226 22:22:34.320478  766058 command_runner.go:130] >     },
	I1226 22:22:34.320483  766058 command_runner.go:130] >     {
	I1226 22:22:34.320491  766058 command_runner.go:130] >       "id": "9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace",
	I1226 22:22:34.320498  766058 command_runner.go:130] >       "repoTags": [
	I1226 22:22:34.320504  766058 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1226 22:22:34.320511  766058 command_runner.go:130] >       ],
	I1226 22:22:34.320529  766058 command_runner.go:130] >       "repoDigests": [
	I1226 22:22:34.320538  766058 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3",
	I1226 22:22:34.320547  766058 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"
	I1226 22:22:34.320557  766058 command_runner.go:130] >       ],
	I1226 22:22:34.320563  766058 command_runner.go:130] >       "size": "182203183",
	I1226 22:22:34.320568  766058 command_runner.go:130] >       "uid": {
	I1226 22:22:34.320577  766058 command_runner.go:130] >         "value": "0"
	I1226 22:22:34.320582  766058 command_runner.go:130] >       },
	I1226 22:22:34.320587  766058 command_runner.go:130] >       "username": "",
	I1226 22:22:34.320595  766058 command_runner.go:130] >       "spec": null,
	I1226 22:22:34.320603  766058 command_runner.go:130] >       "pinned": false
	I1226 22:22:34.320607  766058 command_runner.go:130] >     },
	I1226 22:22:34.320611  766058 command_runner.go:130] >     {
	I1226 22:22:34.320619  766058 command_runner.go:130] >       "id": "04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419",
	I1226 22:22:34.320624  766058 command_runner.go:130] >       "repoTags": [
	I1226 22:22:34.320630  766058 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I1226 22:22:34.320634  766058 command_runner.go:130] >       ],
	I1226 22:22:34.320639  766058 command_runner.go:130] >       "repoDigests": [
	I1226 22:22:34.320649  766058 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb",
	I1226 22:22:34.320658  766058 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2"
	I1226 22:22:34.320662  766058 command_runner.go:130] >       ],
	I1226 22:22:34.320667  766058 command_runner.go:130] >       "size": "121119694",
	I1226 22:22:34.320674  766058 command_runner.go:130] >       "uid": {
	I1226 22:22:34.320679  766058 command_runner.go:130] >         "value": "0"
	I1226 22:22:34.320685  766058 command_runner.go:130] >       },
	I1226 22:22:34.320690  766058 command_runner.go:130] >       "username": "",
	I1226 22:22:34.320695  766058 command_runner.go:130] >       "spec": null,
	I1226 22:22:34.320700  766058 command_runner.go:130] >       "pinned": false
	I1226 22:22:34.320705  766058 command_runner.go:130] >     },
	I1226 22:22:34.320711  766058 command_runner.go:130] >     {
	I1226 22:22:34.320720  766058 command_runner.go:130] >       "id": "9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b",
	I1226 22:22:34.320727  766058 command_runner.go:130] >       "repoTags": [
	I1226 22:22:34.320734  766058 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I1226 22:22:34.320739  766058 command_runner.go:130] >       ],
	I1226 22:22:34.320746  766058 command_runner.go:130] >       "repoDigests": [
	I1226 22:22:34.320756  766058 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I1226 22:22:34.320769  766058 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e"
	I1226 22:22:34.320774  766058 command_runner.go:130] >       ],
	I1226 22:22:34.320779  766058 command_runner.go:130] >       "size": "117252916",
	I1226 22:22:34.320784  766058 command_runner.go:130] >       "uid": {
	I1226 22:22:34.320791  766058 command_runner.go:130] >         "value": "0"
	I1226 22:22:34.320798  766058 command_runner.go:130] >       },
	I1226 22:22:34.320805  766058 command_runner.go:130] >       "username": "",
	I1226 22:22:34.320814  766058 command_runner.go:130] >       "spec": null,
	I1226 22:22:34.320819  766058 command_runner.go:130] >       "pinned": false
	I1226 22:22:34.320823  766058 command_runner.go:130] >     },
	I1226 22:22:34.320830  766058 command_runner.go:130] >     {
	I1226 22:22:34.320837  766058 command_runner.go:130] >       "id": "3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39",
	I1226 22:22:34.320845  766058 command_runner.go:130] >       "repoTags": [
	I1226 22:22:34.320851  766058 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I1226 22:22:34.320858  766058 command_runner.go:130] >       ],
	I1226 22:22:34.320863  766058 command_runner.go:130] >       "repoDigests": [
	I1226 22:22:34.320872  766058 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68",
	I1226 22:22:34.320884  766058 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I1226 22:22:34.320889  766058 command_runner.go:130] >       ],
	I1226 22:22:34.320894  766058 command_runner.go:130] >       "size": "69992343",
	I1226 22:22:34.320902  766058 command_runner.go:130] >       "uid": null,
	I1226 22:22:34.320907  766058 command_runner.go:130] >       "username": "",
	I1226 22:22:34.320925  766058 command_runner.go:130] >       "spec": null,
	I1226 22:22:34.320935  766058 command_runner.go:130] >       "pinned": false
	I1226 22:22:34.320942  766058 command_runner.go:130] >     },
	I1226 22:22:34.320946  766058 command_runner.go:130] >     {
	I1226 22:22:34.320954  766058 command_runner.go:130] >       "id": "05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54",
	I1226 22:22:34.320962  766058 command_runner.go:130] >       "repoTags": [
	I1226 22:22:34.320968  766058 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I1226 22:22:34.320973  766058 command_runner.go:130] >       ],
	I1226 22:22:34.320978  766058 command_runner.go:130] >       "repoDigests": [
	I1226 22:22:34.321000  766058 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I1226 22:22:34.321014  766058 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe"
	I1226 22:22:34.321018  766058 command_runner.go:130] >       ],
	I1226 22:22:34.321024  766058 command_runner.go:130] >       "size": "59253556",
	I1226 22:22:34.321031  766058 command_runner.go:130] >       "uid": {
	I1226 22:22:34.321036  766058 command_runner.go:130] >         "value": "0"
	I1226 22:22:34.321040  766058 command_runner.go:130] >       },
	I1226 22:22:34.321048  766058 command_runner.go:130] >       "username": "",
	I1226 22:22:34.321055  766058 command_runner.go:130] >       "spec": null,
	I1226 22:22:34.321060  766058 command_runner.go:130] >       "pinned": false
	I1226 22:22:34.321067  766058 command_runner.go:130] >     },
	I1226 22:22:34.321074  766058 command_runner.go:130] >     {
	I1226 22:22:34.321082  766058 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I1226 22:22:34.321090  766058 command_runner.go:130] >       "repoTags": [
	I1226 22:22:34.321096  766058 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1226 22:22:34.321104  766058 command_runner.go:130] >       ],
	I1226 22:22:34.321109  766058 command_runner.go:130] >       "repoDigests": [
	I1226 22:22:34.321118  766058 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I1226 22:22:34.321127  766058 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I1226 22:22:34.321134  766058 command_runner.go:130] >       ],
	I1226 22:22:34.321139  766058 command_runner.go:130] >       "size": "520014",
	I1226 22:22:34.321147  766058 command_runner.go:130] >       "uid": {
	I1226 22:22:34.321152  766058 command_runner.go:130] >         "value": "65535"
	I1226 22:22:34.321156  766058 command_runner.go:130] >       },
	I1226 22:22:34.321161  766058 command_runner.go:130] >       "username": "",
	I1226 22:22:34.321169  766058 command_runner.go:130] >       "spec": null,
	I1226 22:22:34.321174  766058 command_runner.go:130] >       "pinned": false
	I1226 22:22:34.321179  766058 command_runner.go:130] >     }
	I1226 22:22:34.321185  766058 command_runner.go:130] >   ]
	I1226 22:22:34.321192  766058 command_runner.go:130] > }
	I1226 22:22:34.323989  766058 crio.go:496] all images are preloaded for cri-o runtime.
	I1226 22:22:34.324013  766058 crio.go:415] Images already preloaded, skipping extraction
	I1226 22:22:34.324066  766058 ssh_runner.go:195] Run: sudo crictl images --output json
	I1226 22:22:34.363357  766058 command_runner.go:130] > {
	I1226 22:22:34.363381  766058 command_runner.go:130] >   "images": [
	I1226 22:22:34.363388  766058 command_runner.go:130] >     {
	I1226 22:22:34.363397  766058 command_runner.go:130] >       "id": "04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26",
	I1226 22:22:34.363404  766058 command_runner.go:130] >       "repoTags": [
	I1226 22:22:34.363421  766058 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1226 22:22:34.363426  766058 command_runner.go:130] >       ],
	I1226 22:22:34.363431  766058 command_runner.go:130] >       "repoDigests": [
	I1226 22:22:34.363447  766058 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1226 22:22:34.363461  766058 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"
	I1226 22:22:34.363466  766058 command_runner.go:130] >       ],
	I1226 22:22:34.363471  766058 command_runner.go:130] >       "size": "60867618",
	I1226 22:22:34.363476  766058 command_runner.go:130] >       "uid": null,
	I1226 22:22:34.363481  766058 command_runner.go:130] >       "username": "",
	I1226 22:22:34.363487  766058 command_runner.go:130] >       "spec": null,
	I1226 22:22:34.363495  766058 command_runner.go:130] >       "pinned": false
	I1226 22:22:34.363500  766058 command_runner.go:130] >     },
	I1226 22:22:34.363510  766058 command_runner.go:130] >     {
	I1226 22:22:34.363518  766058 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1226 22:22:34.363526  766058 command_runner.go:130] >       "repoTags": [
	I1226 22:22:34.363533  766058 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1226 22:22:34.363538  766058 command_runner.go:130] >       ],
	I1226 22:22:34.363542  766058 command_runner.go:130] >       "repoDigests": [
	I1226 22:22:34.363552  766058 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1226 22:22:34.363562  766058 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1226 22:22:34.363566  766058 command_runner.go:130] >       ],
	I1226 22:22:34.363575  766058 command_runner.go:130] >       "size": "29037500",
	I1226 22:22:34.363580  766058 command_runner.go:130] >       "uid": null,
	I1226 22:22:34.363585  766058 command_runner.go:130] >       "username": "",
	I1226 22:22:34.363592  766058 command_runner.go:130] >       "spec": null,
	I1226 22:22:34.363597  766058 command_runner.go:130] >       "pinned": false
	I1226 22:22:34.363609  766058 command_runner.go:130] >     },
	I1226 22:22:34.363613  766058 command_runner.go:130] >     {
	I1226 22:22:34.363621  766058 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I1226 22:22:34.363630  766058 command_runner.go:130] >       "repoTags": [
	I1226 22:22:34.363638  766058 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1226 22:22:34.363647  766058 command_runner.go:130] >       ],
	I1226 22:22:34.363654  766058 command_runner.go:130] >       "repoDigests": [
	I1226 22:22:34.363664  766058 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I1226 22:22:34.363674  766058 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I1226 22:22:34.363681  766058 command_runner.go:130] >       ],
	I1226 22:22:34.363686  766058 command_runner.go:130] >       "size": "51393451",
	I1226 22:22:34.363691  766058 command_runner.go:130] >       "uid": null,
	I1226 22:22:34.363698  766058 command_runner.go:130] >       "username": "",
	I1226 22:22:34.363703  766058 command_runner.go:130] >       "spec": null,
	I1226 22:22:34.363711  766058 command_runner.go:130] >       "pinned": false
	I1226 22:22:34.363715  766058 command_runner.go:130] >     },
	I1226 22:22:34.363727  766058 command_runner.go:130] >     {
	I1226 22:22:34.363735  766058 command_runner.go:130] >       "id": "9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace",
	I1226 22:22:34.363740  766058 command_runner.go:130] >       "repoTags": [
	I1226 22:22:34.363746  766058 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1226 22:22:34.363755  766058 command_runner.go:130] >       ],
	I1226 22:22:34.363761  766058 command_runner.go:130] >       "repoDigests": [
	I1226 22:22:34.363772  766058 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3",
	I1226 22:22:34.363784  766058 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"
	I1226 22:22:34.363796  766058 command_runner.go:130] >       ],
	I1226 22:22:34.363805  766058 command_runner.go:130] >       "size": "182203183",
	I1226 22:22:34.363810  766058 command_runner.go:130] >       "uid": {
	I1226 22:22:34.363815  766058 command_runner.go:130] >         "value": "0"
	I1226 22:22:34.363819  766058 command_runner.go:130] >       },
	I1226 22:22:34.363824  766058 command_runner.go:130] >       "username": "",
	I1226 22:22:34.363832  766058 command_runner.go:130] >       "spec": null,
	I1226 22:22:34.363839  766058 command_runner.go:130] >       "pinned": false
	I1226 22:22:34.363844  766058 command_runner.go:130] >     },
	I1226 22:22:34.363848  766058 command_runner.go:130] >     {
	I1226 22:22:34.363862  766058 command_runner.go:130] >       "id": "04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419",
	I1226 22:22:34.363871  766058 command_runner.go:130] >       "repoTags": [
	I1226 22:22:34.363878  766058 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I1226 22:22:34.363890  766058 command_runner.go:130] >       ],
	I1226 22:22:34.363895  766058 command_runner.go:130] >       "repoDigests": [
	I1226 22:22:34.363904  766058 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb",
	I1226 22:22:34.363919  766058 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2"
	I1226 22:22:34.363927  766058 command_runner.go:130] >       ],
	I1226 22:22:34.363932  766058 command_runner.go:130] >       "size": "121119694",
	I1226 22:22:34.363937  766058 command_runner.go:130] >       "uid": {
	I1226 22:22:34.363942  766058 command_runner.go:130] >         "value": "0"
	I1226 22:22:34.363949  766058 command_runner.go:130] >       },
	I1226 22:22:34.363954  766058 command_runner.go:130] >       "username": "",
	I1226 22:22:34.363959  766058 command_runner.go:130] >       "spec": null,
	I1226 22:22:34.363966  766058 command_runner.go:130] >       "pinned": false
	I1226 22:22:34.363971  766058 command_runner.go:130] >     },
	I1226 22:22:34.363975  766058 command_runner.go:130] >     {
	I1226 22:22:34.363982  766058 command_runner.go:130] >       "id": "9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b",
	I1226 22:22:34.363987  766058 command_runner.go:130] >       "repoTags": [
	I1226 22:22:34.364004  766058 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I1226 22:22:34.364009  766058 command_runner.go:130] >       ],
	I1226 22:22:34.364014  766058 command_runner.go:130] >       "repoDigests": [
	I1226 22:22:34.364026  766058 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I1226 22:22:34.364036  766058 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e"
	I1226 22:22:34.364047  766058 command_runner.go:130] >       ],
	I1226 22:22:34.364056  766058 command_runner.go:130] >       "size": "117252916",
	I1226 22:22:34.364063  766058 command_runner.go:130] >       "uid": {
	I1226 22:22:34.364068  766058 command_runner.go:130] >         "value": "0"
	I1226 22:22:34.364073  766058 command_runner.go:130] >       },
	I1226 22:22:34.364080  766058 command_runner.go:130] >       "username": "",
	I1226 22:22:34.364087  766058 command_runner.go:130] >       "spec": null,
	I1226 22:22:34.364092  766058 command_runner.go:130] >       "pinned": false
	I1226 22:22:34.364099  766058 command_runner.go:130] >     },
	I1226 22:22:34.364103  766058 command_runner.go:130] >     {
	I1226 22:22:34.364111  766058 command_runner.go:130] >       "id": "3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39",
	I1226 22:22:34.364116  766058 command_runner.go:130] >       "repoTags": [
	I1226 22:22:34.364125  766058 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I1226 22:22:34.364130  766058 command_runner.go:130] >       ],
	I1226 22:22:34.364135  766058 command_runner.go:130] >       "repoDigests": [
	I1226 22:22:34.364144  766058 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68",
	I1226 22:22:34.364156  766058 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I1226 22:22:34.364161  766058 command_runner.go:130] >       ],
	I1226 22:22:34.364171  766058 command_runner.go:130] >       "size": "69992343",
	I1226 22:22:34.364179  766058 command_runner.go:130] >       "uid": null,
	I1226 22:22:34.364185  766058 command_runner.go:130] >       "username": "",
	I1226 22:22:34.364189  766058 command_runner.go:130] >       "spec": null,
	I1226 22:22:34.364198  766058 command_runner.go:130] >       "pinned": false
	I1226 22:22:34.364206  766058 command_runner.go:130] >     },
	I1226 22:22:34.364211  766058 command_runner.go:130] >     {
	I1226 22:22:34.364220  766058 command_runner.go:130] >       "id": "05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54",
	I1226 22:22:34.364229  766058 command_runner.go:130] >       "repoTags": [
	I1226 22:22:34.364235  766058 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I1226 22:22:34.364242  766058 command_runner.go:130] >       ],
	I1226 22:22:34.364247  766058 command_runner.go:130] >       "repoDigests": [
	I1226 22:22:34.364269  766058 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I1226 22:22:34.364282  766058 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe"
	I1226 22:22:34.364290  766058 command_runner.go:130] >       ],
	I1226 22:22:34.364298  766058 command_runner.go:130] >       "size": "59253556",
	I1226 22:22:34.364303  766058 command_runner.go:130] >       "uid": {
	I1226 22:22:34.364308  766058 command_runner.go:130] >         "value": "0"
	I1226 22:22:34.364314  766058 command_runner.go:130] >       },
	I1226 22:22:34.364320  766058 command_runner.go:130] >       "username": "",
	I1226 22:22:34.364327  766058 command_runner.go:130] >       "spec": null,
	I1226 22:22:34.364332  766058 command_runner.go:130] >       "pinned": false
	I1226 22:22:34.364344  766058 command_runner.go:130] >     },
	I1226 22:22:34.364348  766058 command_runner.go:130] >     {
	I1226 22:22:34.364356  766058 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I1226 22:22:34.364364  766058 command_runner.go:130] >       "repoTags": [
	I1226 22:22:34.364369  766058 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1226 22:22:34.364374  766058 command_runner.go:130] >       ],
	I1226 22:22:34.364381  766058 command_runner.go:130] >       "repoDigests": [
	I1226 22:22:34.364390  766058 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I1226 22:22:34.364401  766058 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I1226 22:22:34.364406  766058 command_runner.go:130] >       ],
	I1226 22:22:34.364411  766058 command_runner.go:130] >       "size": "520014",
	I1226 22:22:34.364418  766058 command_runner.go:130] >       "uid": {
	I1226 22:22:34.364426  766058 command_runner.go:130] >         "value": "65535"
	I1226 22:22:34.364430  766058 command_runner.go:130] >       },
	I1226 22:22:34.364439  766058 command_runner.go:130] >       "username": "",
	I1226 22:22:34.364450  766058 command_runner.go:130] >       "spec": null,
	I1226 22:22:34.364455  766058 command_runner.go:130] >       "pinned": false
	I1226 22:22:34.364460  766058 command_runner.go:130] >     }
	I1226 22:22:34.364466  766058 command_runner.go:130] >   ]
	I1226 22:22:34.364471  766058 command_runner.go:130] > }
	I1226 22:22:34.367078  766058 crio.go:496] all images are preloaded for cri-o runtime.
	I1226 22:22:34.367127  766058 cache_images.go:84] Images are preloaded, skipping loading
	I1226 22:22:34.367205  766058 ssh_runner.go:195] Run: crio config
	I1226 22:22:34.419538  766058 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1226 22:22:34.419567  766058 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1226 22:22:34.419577  766058 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1226 22:22:34.419583  766058 command_runner.go:130] > #
	I1226 22:22:34.419592  766058 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1226 22:22:34.419600  766058 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1226 22:22:34.419622  766058 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1226 22:22:34.419644  766058 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1226 22:22:34.419655  766058 command_runner.go:130] > # reload'.
	I1226 22:22:34.419663  766058 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1226 22:22:34.419677  766058 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1226 22:22:34.419734  766058 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1226 22:22:34.419774  766058 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1226 22:22:34.419819  766058 command_runner.go:130] > [crio]
	I1226 22:22:34.419826  766058 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1226 22:22:34.419836  766058 command_runner.go:130] > # containers images, in this directory.
	I1226 22:22:34.419847  766058 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1226 22:22:34.419859  766058 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1226 22:22:34.420056  766058 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1226 22:22:34.420080  766058 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1226 22:22:34.420089  766058 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1226 22:22:34.420100  766058 command_runner.go:130] > # storage_driver = "vfs"
	I1226 22:22:34.420107  766058 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1226 22:22:34.420116  766058 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1226 22:22:34.420123  766058 command_runner.go:130] > # storage_option = [
	I1226 22:22:34.420299  766058 command_runner.go:130] > # ]
	I1226 22:22:34.420316  766058 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1226 22:22:34.420325  766058 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1226 22:22:34.420335  766058 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1226 22:22:34.420343  766058 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1226 22:22:34.420350  766058 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1226 22:22:34.420363  766058 command_runner.go:130] > # always happen on a node reboot
	I1226 22:22:34.420374  766058 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1226 22:22:34.420382  766058 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1226 22:22:34.420392  766058 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1226 22:22:34.420405  766058 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1226 22:22:34.420416  766058 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1226 22:22:34.420426  766058 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1226 22:22:34.420440  766058 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1226 22:22:34.420446  766058 command_runner.go:130] > # internal_wipe = true
	I1226 22:22:34.420453  766058 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1226 22:22:34.420464  766058 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1226 22:22:34.420471  766058 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1226 22:22:34.420481  766058 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1226 22:22:34.420491  766058 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1226 22:22:34.420499  766058 command_runner.go:130] > [crio.api]
	I1226 22:22:34.420506  766058 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1226 22:22:34.420511  766058 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1226 22:22:34.420536  766058 command_runner.go:130] > # IP address on which the stream server will listen.
	I1226 22:22:34.420548  766058 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1226 22:22:34.420562  766058 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1226 22:22:34.420568  766058 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1226 22:22:34.420577  766058 command_runner.go:130] > # stream_port = "0"
	I1226 22:22:34.420584  766058 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1226 22:22:34.420594  766058 command_runner.go:130] > # stream_enable_tls = false
	I1226 22:22:34.420601  766058 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1226 22:22:34.420607  766058 command_runner.go:130] > # stream_idle_timeout = ""
	I1226 22:22:34.420618  766058 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1226 22:22:34.420626  766058 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1226 22:22:34.420631  766058 command_runner.go:130] > # minutes.
	I1226 22:22:34.420641  766058 command_runner.go:130] > # stream_tls_cert = ""
	I1226 22:22:34.420648  766058 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1226 22:22:34.420660  766058 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1226 22:22:34.420666  766058 command_runner.go:130] > # stream_tls_key = ""
	I1226 22:22:34.420677  766058 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1226 22:22:34.420686  766058 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1226 22:22:34.420696  766058 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1226 22:22:34.420703  766058 command_runner.go:130] > # stream_tls_ca = ""
	I1226 22:22:34.420713  766058 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1226 22:22:34.420725  766058 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1226 22:22:34.420734  766058 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1226 22:22:34.420744  766058 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1226 22:22:34.420763  766058 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1226 22:22:34.420775  766058 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1226 22:22:34.420780  766058 command_runner.go:130] > [crio.runtime]
	I1226 22:22:34.420787  766058 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1226 22:22:34.420794  766058 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1226 22:22:34.420801  766058 command_runner.go:130] > # "nofile=1024:2048"
	I1226 22:22:34.420809  766058 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1226 22:22:34.420818  766058 command_runner.go:130] > # default_ulimits = [
	I1226 22:22:34.420823  766058 command_runner.go:130] > # ]
	I1226 22:22:34.420830  766058 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1226 22:22:34.420841  766058 command_runner.go:130] > # no_pivot = false
	I1226 22:22:34.420848  766058 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1226 22:22:34.420860  766058 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1226 22:22:34.420868  766058 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1226 22:22:34.420876  766058 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1226 22:22:34.420882  766058 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1226 22:22:34.420891  766058 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1226 22:22:34.420900  766058 command_runner.go:130] > # conmon = ""
	I1226 22:22:34.420905  766058 command_runner.go:130] > # Cgroup setting for conmon
	I1226 22:22:34.420914  766058 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1226 22:22:34.420923  766058 command_runner.go:130] > conmon_cgroup = "pod"
	I1226 22:22:34.420931  766058 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1226 22:22:34.420944  766058 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1226 22:22:34.420953  766058 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1226 22:22:34.420958  766058 command_runner.go:130] > # conmon_env = [
	I1226 22:22:34.420965  766058 command_runner.go:130] > # ]
	I1226 22:22:34.420972  766058 command_runner.go:130] > # Additional environment variables to set for all the
	I1226 22:22:34.420982  766058 command_runner.go:130] > # containers. These are overridden if set in the
	I1226 22:22:34.420990  766058 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1226 22:22:34.420998  766058 command_runner.go:130] > # default_env = [
	I1226 22:22:34.421003  766058 command_runner.go:130] > # ]
	I1226 22:22:34.421013  766058 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1226 22:22:34.421197  766058 command_runner.go:130] > # selinux = false
	I1226 22:22:34.421214  766058 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1226 22:22:34.421222  766058 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1226 22:22:34.421230  766058 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1226 22:22:34.421242  766058 command_runner.go:130] > # seccomp_profile = ""
	I1226 22:22:34.421253  766058 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1226 22:22:34.421265  766058 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1226 22:22:34.421273  766058 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1226 22:22:34.421282  766058 command_runner.go:130] > # which might increase security.
	I1226 22:22:34.421288  766058 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1226 22:22:34.421296  766058 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1226 22:22:34.421307  766058 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1226 22:22:34.421315  766058 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1226 22:22:34.421326  766058 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1226 22:22:34.421332  766058 command_runner.go:130] > # This option supports live configuration reload.
	I1226 22:22:34.421338  766058 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1226 22:22:34.421350  766058 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1226 22:22:34.421356  766058 command_runner.go:130] > # the cgroup blockio controller.
	I1226 22:22:34.421365  766058 command_runner.go:130] > # blockio_config_file = ""
	I1226 22:22:34.421373  766058 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1226 22:22:34.421381  766058 command_runner.go:130] > # irqbalance daemon.
	I1226 22:22:34.421387  766058 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1226 22:22:34.421395  766058 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1226 22:22:34.421407  766058 command_runner.go:130] > # This option supports live configuration reload.
	I1226 22:22:34.421412  766058 command_runner.go:130] > # rdt_config_file = ""
	I1226 22:22:34.421419  766058 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1226 22:22:34.421428  766058 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1226 22:22:34.421436  766058 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1226 22:22:34.421444  766058 command_runner.go:130] > # separate_pull_cgroup = ""
	I1226 22:22:34.421452  766058 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1226 22:22:34.421465  766058 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1226 22:22:34.421470  766058 command_runner.go:130] > # will be added.
	I1226 22:22:34.421476  766058 command_runner.go:130] > # default_capabilities = [
	I1226 22:22:34.421486  766058 command_runner.go:130] > # 	"CHOWN",
	I1226 22:22:34.421491  766058 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1226 22:22:34.421495  766058 command_runner.go:130] > # 	"FSETID",
	I1226 22:22:34.421501  766058 command_runner.go:130] > # 	"FOWNER",
	I1226 22:22:34.421510  766058 command_runner.go:130] > # 	"SETGID",
	I1226 22:22:34.421515  766058 command_runner.go:130] > # 	"SETUID",
	I1226 22:22:34.421519  766058 command_runner.go:130] > # 	"SETPCAP",
	I1226 22:22:34.421681  766058 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1226 22:22:34.421699  766058 command_runner.go:130] > # 	"KILL",
	I1226 22:22:34.421705  766058 command_runner.go:130] > # ]
	I1226 22:22:34.421714  766058 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1226 22:22:34.421726  766058 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1226 22:22:34.421733  766058 command_runner.go:130] > # add_inheritable_capabilities = true
	I1226 22:22:34.421749  766058 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1226 22:22:34.421756  766058 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1226 22:22:34.421766  766058 command_runner.go:130] > # default_sysctls = [
	I1226 22:22:34.421771  766058 command_runner.go:130] > # ]
	I1226 22:22:34.421777  766058 command_runner.go:130] > # List of devices on the host that a
	I1226 22:22:34.421788  766058 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1226 22:22:34.421794  766058 command_runner.go:130] > # allowed_devices = [
	I1226 22:22:34.421803  766058 command_runner.go:130] > # 	"/dev/fuse",
	I1226 22:22:34.421807  766058 command_runner.go:130] > # ]
	I1226 22:22:34.421814  766058 command_runner.go:130] > # List of additional devices. specified as
	I1226 22:22:34.421847  766058 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1226 22:22:34.421860  766058 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1226 22:22:34.421868  766058 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1226 22:22:34.421878  766058 command_runner.go:130] > # additional_devices = [
	I1226 22:22:34.421883  766058 command_runner.go:130] > # ]
	I1226 22:22:34.421894  766058 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1226 22:22:34.421900  766058 command_runner.go:130] > # cdi_spec_dirs = [
	I1226 22:22:34.421909  766058 command_runner.go:130] > # 	"/etc/cdi",
	I1226 22:22:34.421917  766058 command_runner.go:130] > # 	"/var/run/cdi",
	I1226 22:22:34.421922  766058 command_runner.go:130] > # ]
	I1226 22:22:34.421933  766058 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1226 22:22:34.421945  766058 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1226 22:22:34.421951  766058 command_runner.go:130] > # Defaults to false.
	I1226 22:22:34.421962  766058 command_runner.go:130] > # device_ownership_from_security_context = false
	I1226 22:22:34.421970  766058 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1226 22:22:34.421982  766058 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1226 22:22:34.421987  766058 command_runner.go:130] > # hooks_dir = [
	I1226 22:22:34.421997  766058 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1226 22:22:34.422001  766058 command_runner.go:130] > # ]
	I1226 22:22:34.422011  766058 command_runner.go:130] > # Path to the file specifying the default mounts for each container. The
	I1226 22:22:34.422024  766058 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1226 22:22:34.422030  766058 command_runner.go:130] > # its default mounts from the following two files:
	I1226 22:22:34.422038  766058 command_runner.go:130] > #
	I1226 22:22:34.422045  766058 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1226 22:22:34.422053  766058 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1226 22:22:34.422063  766058 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1226 22:22:34.422067  766058 command_runner.go:130] > #
	I1226 22:22:34.422075  766058 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1226 22:22:34.422087  766058 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1226 22:22:34.422094  766058 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1226 22:22:34.422102  766058 command_runner.go:130] > #      only add mounts it finds in this file.
	I1226 22:22:34.422106  766058 command_runner.go:130] > #
	I1226 22:22:34.422117  766058 command_runner.go:130] > # default_mounts_file = ""
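A sketch tying the hooks and mounts options together (the hooks directory is the default shown above; pointing default_mounts_file at the override file is illustrative):

	[crio.runtime]
	# Directories scanned for OCI hook definitions; missing ones are skipped.
	hooks_dir = [
		"/usr/share/containers/oci/hooks.d",
	]
	# Once set, CRI-O adds only the /SRC:/DST mounts listed in this file.
	default_mounts_file = "/etc/containers/mounts.conf"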
	I1226 22:22:34.422123  766058 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1226 22:22:34.422136  766058 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1226 22:22:34.422141  766058 command_runner.go:130] > # pids_limit = 0
	I1226 22:22:34.422148  766058 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1226 22:22:34.422161  766058 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1226 22:22:34.422169  766058 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1226 22:22:34.422182  766058 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1226 22:22:34.422187  766058 command_runner.go:130] > # log_size_max = -1
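Both limits are deprecated in favor of the kubelet flags named above, but for completeness a sketch of the crio.conf form (values are illustrative):

	[crio.runtime]
	# Cap the number of processes per container (prefer kubelet --pod-pids-limit).
	pids_limit = 1024
	# Positive values must be >= 8192 to match conmon's read buffer
	# (prefer kubelet --container-log-max-size).
	log_size_max = 16384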
	I1226 22:22:34.422201  766058 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1226 22:22:34.422374  766058 command_runner.go:130] > # log_to_journald = false
	I1226 22:22:34.422396  766058 command_runner.go:130] > # Path to directory in which container exit files are written by conmon.
	I1226 22:22:34.422426  766058 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1226 22:22:34.422440  766058 command_runner.go:130] > # Path to directory for container attach sockets.
	I1226 22:22:34.422446  766058 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1226 22:22:34.422458  766058 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1226 22:22:34.422463  766058 command_runner.go:130] > # bind_mount_prefix = ""
	I1226 22:22:34.422474  766058 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1226 22:22:34.422491  766058 command_runner.go:130] > # read_only = false
	I1226 22:22:34.422505  766058 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1226 22:22:34.422513  766058 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1226 22:22:34.422523  766058 command_runner.go:130] > # live configuration reload.
	I1226 22:22:34.422529  766058 command_runner.go:130] > # log_level = "info"
	I1226 22:22:34.422536  766058 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1226 22:22:34.422545  766058 command_runner.go:130] > # This option supports live configuration reload.
	I1226 22:22:34.422550  766058 command_runner.go:130] > # log_filter = ""
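A sketch of the logging knobs (both support live reload; the filter pattern is an arbitrary example):

	[crio.runtime]
	# One of: fatal, panic, error, warn, info, debug, trace.
	log_level = "debug"
	# Keep only log messages matching this regular expression.
	log_filter = "pod"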
	I1226 22:22:34.422580  766058 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1226 22:22:34.422596  766058 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1226 22:22:34.422601  766058 command_runner.go:130] > # separated by comma.
	I1226 22:22:34.422613  766058 command_runner.go:130] > # uid_mappings = ""
	I1226 22:22:34.422621  766058 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1226 22:22:34.422632  766058 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1226 22:22:34.422637  766058 command_runner.go:130] > # separated by comma.
	I1226 22:22:34.422655  766058 command_runner.go:130] > # gid_mappings = ""
	I1226 22:22:34.422665  766058 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1226 22:22:34.422680  766058 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1226 22:22:34.422694  766058 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1226 22:22:34.422699  766058 command_runner.go:130] > # minimum_mappable_uid = -1
	I1226 22:22:34.422711  766058 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1226 22:22:34.422719  766058 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1226 22:22:34.422740  766058 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1226 22:22:34.422760  766058 command_runner.go:130] > # minimum_mappable_gid = -1
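A sketch of a user-namespace ID mapping (the 100000:65536 range is a common convention, used here for illustration rather than taken from this run):

	[crio.runtime]
	# Map container root (UID/GID 0) onto an unprivileged host range.
	uid_mappings = "0:100000:65536"
	gid_mappings = "0:100000:65536"
	# Refuse any mapping that would expose host IDs below 100000.
	minimum_mappable_uid = 100000
	minimum_mappable_gid = 100000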
	I1226 22:22:34.422773  766058 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1226 22:22:34.422781  766058 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1226 22:22:34.422792  766058 command_runner.go:130] > # value is 30s; lower values are not considered by CRI-O.
	I1226 22:22:34.422985  766058 command_runner.go:130] > # ctr_stop_timeout = 30
	I1226 22:22:34.423002  766058 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1226 22:22:34.423028  766058 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1226 22:22:34.423043  766058 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1226 22:22:34.423051  766058 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1226 22:22:34.423061  766058 command_runner.go:130] > # drop_infra_ctr = true
	I1226 22:22:34.423069  766058 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1226 22:22:34.423079  766058 command_runner.go:130] > # You can use Linux CPU list format to specify desired CPUs.
	I1226 22:22:34.423107  766058 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1226 22:22:34.423135  766058 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1226 22:22:34.423142  766058 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1226 22:22:34.423152  766058 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1226 22:22:34.423158  766058 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1226 22:22:34.423181  766058 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1226 22:22:34.423201  766058 command_runner.go:130] > # pinns_path = ""
	I1226 22:22:34.423214  766058 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1226 22:22:34.423225  766058 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1226 22:22:34.423233  766058 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1226 22:22:34.423242  766058 command_runner.go:130] > # default_runtime = "runc"
	I1226 22:22:34.423248  766058 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1226 22:22:34.423268  766058 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating them as directories).
	I1226 22:22:34.423294  766058 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1226 22:22:34.423307  766058 command_runner.go:130] > # creation as a file is not desired either.
	I1226 22:22:34.423318  766058 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1226 22:22:34.423328  766058 command_runner.go:130] > # the hostname is being managed dynamically.
	I1226 22:22:34.423334  766058 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1226 22:22:34.423338  766058 command_runner.go:130] > # ]
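A sketch using the /etc/hostname case described above:

	[crio.runtime]
	# Fail container creation instead of auto-creating these as directories.
	absent_mount_sources_to_reject = [
		"/etc/hostname",
	]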
	I1226 22:22:34.423346  766058 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1226 22:22:34.423357  766058 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1226 22:22:34.423375  766058 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1226 22:22:34.423388  766058 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1226 22:22:34.423392  766058 command_runner.go:130] > #
	I1226 22:22:34.423413  766058 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1226 22:22:34.423420  766058 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1226 22:22:34.423425  766058 command_runner.go:130] > #  runtime_type = "oci"
	I1226 22:22:34.423436  766058 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1226 22:22:34.423442  766058 command_runner.go:130] > #  privileged_without_host_devices = false
	I1226 22:22:34.423452  766058 command_runner.go:130] > #  allowed_annotations = []
	I1226 22:22:34.423460  766058 command_runner.go:130] > # Where:
	I1226 22:22:34.423481  766058 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1226 22:22:34.423498  766058 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1226 22:22:34.423506  766058 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1226 22:22:34.423518  766058 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1226 22:22:34.423522  766058 command_runner.go:130] > #   in $PATH.
	I1226 22:22:34.423535  766058 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1226 22:22:34.423541  766058 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1226 22:22:34.423569  766058 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1226 22:22:34.423579  766058 command_runner.go:130] > #   state.
	I1226 22:22:34.423587  766058 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1226 22:22:34.423599  766058 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1226 22:22:34.423607  766058 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1226 22:22:34.423617  766058 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1226 22:22:34.423625  766058 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1226 22:22:34.423645  766058 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1226 22:22:34.423658  766058 command_runner.go:130] > #   The currently recognized values are:
	I1226 22:22:34.423667  766058 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1226 22:22:34.423692  766058 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1226 22:22:34.423705  766058 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1226 22:22:34.423712  766058 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1226 22:22:34.423725  766058 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1226 22:22:34.423733  766058 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1226 22:22:34.423741  766058 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1226 22:22:34.423758  766058 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1226 22:22:34.423771  766058 command_runner.go:130] > #   should be moved to the container's cgroup
	I1226 22:22:34.423777  766058 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1226 22:22:34.423792  766058 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1226 22:22:34.423803  766058 command_runner.go:130] > runtime_type = "oci"
	I1226 22:22:34.423808  766058 command_runner.go:130] > runtime_root = "/run/runc"
	I1226 22:22:34.423814  766058 command_runner.go:130] > runtime_config_path = ""
	I1226 22:22:34.423818  766058 command_runner.go:130] > monitor_path = ""
	I1226 22:22:34.423823  766058 command_runner.go:130] > monitor_cgroup = ""
	I1226 22:22:34.423833  766058 command_runner.go:130] > monitor_exec_cgroup = ""
	I1226 22:22:34.423893  766058 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1226 22:22:34.423903  766058 command_runner.go:130] > # running containers
	I1226 22:22:34.423913  766058 command_runner.go:130] > #[crio.runtime.runtimes.crun]
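A sketch of what an uncommented crun handler could look like, following the table format documented above (the binary and root paths are assumptions, not paths verified on this host):

	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"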
	I1226 22:22:34.423934  766058 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1226 22:22:34.423951  766058 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1226 22:22:34.423959  766058 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I1226 22:22:34.423965  766058 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1226 22:22:34.423971  766058 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1226 22:22:34.423979  766058 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1226 22:22:34.423984  766058 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1226 22:22:34.423991  766058 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1226 22:22:34.424008  766058 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1226 22:22:34.424023  766058 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1226 22:22:34.424030  766058 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1226 22:22:34.424041  766058 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1226 22:22:34.424050  766058 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1226 22:22:34.424062  766058 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1226 22:22:34.424069  766058 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1226 22:22:34.424096  766058 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1226 22:22:34.424117  766058 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1226 22:22:34.424133  766058 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1226 22:22:34.424143  766058 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1226 22:22:34.424148  766058 command_runner.go:130] > # Example:
	I1226 22:22:34.424154  766058 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1226 22:22:34.424162  766058 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1226 22:22:34.424187  766058 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1226 22:22:34.424201  766058 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1226 22:22:34.424207  766058 command_runner.go:130] > # cpuset = "0-1"
	I1226 22:22:34.424215  766058 command_runner.go:130] > # cpushares = 0
	I1226 22:22:34.424220  766058 command_runner.go:130] > # Where:
	I1226 22:22:34.424226  766058 command_runner.go:130] > # The workload name is workload-type.
	I1226 22:22:34.424234  766058 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1226 22:22:34.424243  766058 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1226 22:22:34.424250  766058 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1226 22:22:34.424275  766058 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1226 22:22:34.424302  766058 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
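Assembling the commented example into one concrete sketch (the resource defaults are illustrative):

	[crio.runtime.workloads.workload-type]
	# Pods opt in by carrying this annotation key; the value is ignored.
	activation_annotation = "io.crio/workload"
	# Per-container overrides use "io.crio.workload-type.$resource/$ctrName".
	annotation_prefix = "io.crio.workload-type"
	[crio.runtime.workloads.workload-type.resources]
	cpuset = "0-1"
	cpushares = 1024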
	I1226 22:22:34.424308  766058 command_runner.go:130] > # 
	I1226 22:22:34.424326  766058 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1226 22:22:34.424334  766058 command_runner.go:130] > #
	I1226 22:22:34.424347  766058 command_runner.go:130] > # CRI-O reads its configured registry defaults from the system-wide
	I1226 22:22:34.424365  766058 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1226 22:22:34.424380  766058 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1226 22:22:34.424388  766058 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1226 22:22:34.424403  766058 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1226 22:22:34.424416  766058 command_runner.go:130] > [crio.image]
	I1226 22:22:34.424426  766058 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1226 22:22:34.424434  766058 command_runner.go:130] > # default_transport = "docker://"
	I1226 22:22:34.424441  766058 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1226 22:22:34.424452  766058 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1226 22:22:34.424457  766058 command_runner.go:130] > # global_auth_file = ""
	I1226 22:22:34.424463  766058 command_runner.go:130] > # The image used to instantiate infra containers.
	I1226 22:22:34.424480  766058 command_runner.go:130] > # This option supports live configuration reload.
	I1226 22:22:34.424783  766058 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1226 22:22:34.424800  766058 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1226 22:22:34.424821  766058 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1226 22:22:34.424829  766058 command_runner.go:130] > # This option supports live configuration reload.
	I1226 22:22:34.424841  766058 command_runner.go:130] > # pause_image_auth_file = ""
	I1226 22:22:34.424848  766058 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1226 22:22:34.424856  766058 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1226 22:22:34.424863  766058 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1226 22:22:34.424871  766058 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1226 22:22:34.424876  766058 command_runner.go:130] > # pause_command = "/pause"
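A sketch of the pause-container options grouped together (pause_image repeats the value set above; the auth file path is hypothetical):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"
	# Credentials used only for pulling the pause image.
	pause_image_auth_file = "/var/lib/kubelet/config.json"
	# Explicitly pin the default command rather than relying on the image entrypoint.
	pause_command = "/pause"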
	I1226 22:22:34.424883  766058 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1226 22:22:34.424897  766058 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1226 22:22:34.424906  766058 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1226 22:22:34.424913  766058 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1226 22:22:34.424920  766058 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1226 22:22:34.424925  766058 command_runner.go:130] > # signature_policy = ""
	I1226 22:22:34.424932  766058 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1226 22:22:34.424940  766058 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1226 22:22:34.424945  766058 command_runner.go:130] > # changing them here.
	I1226 22:22:34.424955  766058 command_runner.go:130] > # insecure_registries = [
	I1226 22:22:34.424960  766058 command_runner.go:130] > # ]
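A sketch allowing one plaintext registry (the host:port is hypothetical; as the note above says, configuring /etc/containers/registries.conf is preferred):

	[crio.image]
	insecure_registries = [
		"localhost:5000",
	]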
	I1226 22:22:34.424974  766058 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1226 22:22:34.424983  766058 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1226 22:22:34.424990  766058 command_runner.go:130] > # image_volumes = "mkdir"
	I1226 22:22:34.424997  766058 command_runner.go:130] > # Temporary directory to use for storing big files
	I1226 22:22:34.425002  766058 command_runner.go:130] > # big_files_temporary_dir = ""
	I1226 22:22:34.425009  766058 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1226 22:22:34.425014  766058 command_runner.go:130] > # CNI plugins.
	I1226 22:22:34.425018  766058 command_runner.go:130] > [crio.network]
	I1226 22:22:34.425025  766058 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1226 22:22:34.425032  766058 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1226 22:22:34.425043  766058 command_runner.go:130] > # cni_default_network = ""
	I1226 22:22:34.425050  766058 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1226 22:22:34.425056  766058 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1226 22:22:34.425062  766058 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1226 22:22:34.425067  766058 command_runner.go:130] > # plugin_dirs = [
	I1226 22:22:34.425072  766058 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1226 22:22:34.425075  766058 command_runner.go:130] > # ]
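A sketch pinning the CNI selection instead of taking the first config found (the network name is hypothetical; the directories are the defaults shown above):

	[crio.network]
	cni_default_network = "kindnet"
	network_dir = "/etc/cni/net.d/"
	plugin_dirs = [
		"/opt/cni/bin/",
	]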
	I1226 22:22:34.425082  766058 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1226 22:22:34.425088  766058 command_runner.go:130] > [crio.metrics]
	I1226 22:22:34.425096  766058 command_runner.go:130] > # Globally enable or disable metrics support.
	I1226 22:22:34.425101  766058 command_runner.go:130] > # enable_metrics = false
	I1226 22:22:34.425107  766058 command_runner.go:130] > # Specify enabled metrics collectors.
	I1226 22:22:34.425118  766058 command_runner.go:130] > # By default, all metrics are enabled.
	I1226 22:22:34.425126  766058 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1226 22:22:34.425133  766058 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1226 22:22:34.425142  766058 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1226 22:22:34.425146  766058 command_runner.go:130] > # metrics_collectors = [
	I1226 22:22:34.425151  766058 command_runner.go:130] > # 	"operations",
	I1226 22:22:34.425157  766058 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1226 22:22:34.425356  766058 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1226 22:22:34.425366  766058 command_runner.go:130] > # 	"operations_errors",
	I1226 22:22:34.425372  766058 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1226 22:22:34.425377  766058 command_runner.go:130] > # 	"image_pulls_by_name",
	I1226 22:22:34.425383  766058 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1226 22:22:34.425388  766058 command_runner.go:130] > # 	"image_pulls_failures",
	I1226 22:22:34.425393  766058 command_runner.go:130] > # 	"image_pulls_successes",
	I1226 22:22:34.425399  766058 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1226 22:22:34.425416  766058 command_runner.go:130] > # 	"image_layer_reuse",
	I1226 22:22:34.425421  766058 command_runner.go:130] > # 	"containers_oom_total",
	I1226 22:22:34.425433  766058 command_runner.go:130] > # 	"containers_oom",
	I1226 22:22:34.425438  766058 command_runner.go:130] > # 	"processes_defunct",
	I1226 22:22:34.425443  766058 command_runner.go:130] > # 	"operations_total",
	I1226 22:22:34.425451  766058 command_runner.go:130] > # 	"operations_latency_seconds",
	I1226 22:22:34.425457  766058 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1226 22:22:34.425462  766058 command_runner.go:130] > # 	"operations_errors_total",
	I1226 22:22:34.425467  766058 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1226 22:22:34.425473  766058 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1226 22:22:34.425486  766058 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1226 22:22:34.425492  766058 command_runner.go:130] > # 	"image_pulls_success_total",
	I1226 22:22:34.425498  766058 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1226 22:22:34.425503  766058 command_runner.go:130] > # 	"containers_oom_count_total",
	I1226 22:22:34.425507  766058 command_runner.go:130] > # ]
	I1226 22:22:34.425514  766058 command_runner.go:130] > # The port on which the metrics server will listen.
	I1226 22:22:34.425519  766058 command_runner.go:130] > # metrics_port = 9090
	I1226 22:22:34.425525  766058 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1226 22:22:34.425530  766058 command_runner.go:130] > # metrics_socket = ""
	I1226 22:22:34.425536  766058 command_runner.go:130] > # The certificate for the secure metrics server.
	I1226 22:22:34.425543  766058 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1226 22:22:34.425551  766058 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1226 22:22:34.425568  766058 command_runner.go:130] > # certificate on any modification event.
	I1226 22:22:34.425580  766058 command_runner.go:130] > # metrics_cert = ""
	I1226 22:22:34.425586  766058 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1226 22:22:34.425592  766058 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1226 22:22:34.425597  766058 command_runner.go:130] > # metrics_key = ""
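A sketch enabling the metrics endpoint with a narrowed collector set (collector names come from the list above; the port is the documented default):

	[crio.metrics]
	enable_metrics = true
	metrics_collectors = [
		"operations",
		"image_pulls_failure_total",
		"containers_oom_count_total",
	]
	metrics_port = 9090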
	I1226 22:22:34.425604  766058 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1226 22:22:34.425609  766058 command_runner.go:130] > [crio.tracing]
	I1226 22:22:34.425618  766058 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1226 22:22:34.425623  766058 command_runner.go:130] > # enable_tracing = false
	I1226 22:22:34.425662  766058 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1226 22:22:34.425672  766058 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1226 22:22:34.425678  766058 command_runner.go:130] > # Number of samples to collect per million spans.
	I1226 22:22:34.425684  766058 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
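A sketch enabling tracing at a 1% sample rate (the endpoint is the documented default; the rate is illustrative):

	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "0.0.0.0:4317"
	# 10000 out of every 1000000 spans = 1%.
	tracing_sampling_rate_per_million = 10000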
	I1226 22:22:34.425691  766058 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1226 22:22:34.425696  766058 command_runner.go:130] > [crio.stats]
	I1226 22:22:34.425732  766058 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1226 22:22:34.425743  766058 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1226 22:22:34.425748  766058 command_runner.go:130] > # stats_collection_period = 0
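A sketch switching from on-demand to periodic stats collection (the 10-second period is illustrative):

	[crio.stats]
	stats_collection_period = 10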
	I1226 22:22:34.426452  766058 command_runner.go:130] ! time="2023-12-26 22:22:34.414676531Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1226 22:22:34.426477  766058 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1226 22:22:34.426552  766058 cni.go:84] Creating CNI manager for ""
	I1226 22:22:34.426560  766058 cni.go:136] 1 nodes found, recommending kindnet
	I1226 22:22:34.426590  766058 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1226 22:22:34.426621  766058 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-772557 NodeName:multinode-772557 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1226 22:22:34.426764  766058 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-772557"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1226 22:22:34.426826  766058 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-772557 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-772557 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1226 22:22:34.426893  766058 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1226 22:22:34.437524  766058 command_runner.go:130] > kubeadm
	I1226 22:22:34.437545  766058 command_runner.go:130] > kubectl
	I1226 22:22:34.437558  766058 command_runner.go:130] > kubelet
	I1226 22:22:34.437602  766058 binaries.go:44] Found k8s binaries, skipping transfer
	I1226 22:22:34.437688  766058 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1226 22:22:34.448055  766058 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I1226 22:22:34.469438  766058 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1226 22:22:34.490721  766058 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I1226 22:22:34.512235  766058 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1226 22:22:34.516634  766058 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1226 22:22:34.530199  766058 certs.go:56] Setting up /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/multinode-772557 for IP: 192.168.58.2
	I1226 22:22:34.530244  766058 certs.go:190] acquiring lock for shared ca certs: {Name:mke6488a150c186a525017f74b8a69a9f5240d76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 22:22:34.530384  766058 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17857-697646/.minikube/ca.key
	I1226 22:22:34.530434  766058 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17857-697646/.minikube/proxy-client-ca.key
	I1226 22:22:34.530482  766058 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/multinode-772557/client.key
	I1226 22:22:34.530498  766058 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/multinode-772557/client.crt with IP's: []
	I1226 22:22:35.092921  766058 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/multinode-772557/client.crt ...
	I1226 22:22:35.092959  766058 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/multinode-772557/client.crt: {Name:mka5d6cdd70b30bfca894045ec9ddc93801f8cff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 22:22:35.093227  766058 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/multinode-772557/client.key ...
	I1226 22:22:35.093249  766058 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/multinode-772557/client.key: {Name:mkfd8a5e0d112480b61736591ce07c843191ce0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 22:22:35.093356  766058 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/multinode-772557/apiserver.key.cee25041
	I1226 22:22:35.093379  766058 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/multinode-772557/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1226 22:22:35.500445  766058 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/multinode-772557/apiserver.crt.cee25041 ...
	I1226 22:22:35.500477  766058 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/multinode-772557/apiserver.crt.cee25041: {Name:mk4cef80effaaee55974b7a69399c969e9a31fc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 22:22:35.500674  766058 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/multinode-772557/apiserver.key.cee25041 ...
	I1226 22:22:35.500689  766058 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/multinode-772557/apiserver.key.cee25041: {Name:mk656eeaf4c2e23c6e08b4a144e3abe042f8ce09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 22:22:35.500772  766058 certs.go:337] copying /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/multinode-772557/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/multinode-772557/apiserver.crt
	I1226 22:22:35.500862  766058 certs.go:341] copying /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/multinode-772557/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/multinode-772557/apiserver.key
	I1226 22:22:35.500929  766058 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/multinode-772557/proxy-client.key
	I1226 22:22:35.500945  766058 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/multinode-772557/proxy-client.crt with IP's: []
	I1226 22:22:36.295428  766058 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/multinode-772557/proxy-client.crt ...
	I1226 22:22:36.295462  766058 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/multinode-772557/proxy-client.crt: {Name:mkcf1fbcc00ac08fd801dc7e2c9c4f4f66c236ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 22:22:36.295655  766058 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/multinode-772557/proxy-client.key ...
	I1226 22:22:36.295670  766058 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/multinode-772557/proxy-client.key: {Name:mkff444c30c32d539aa4934ab48de317a107ce88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 22:22:36.295748  766058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/multinode-772557/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1226 22:22:36.295773  766058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/multinode-772557/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1226 22:22:36.295791  766058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/multinode-772557/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1226 22:22:36.295807  766058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/multinode-772557/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1226 22:22:36.295818  766058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1226 22:22:36.295834  766058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1226 22:22:36.295845  766058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1226 22:22:36.295860  766058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1226 22:22:36.295933  766058 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/home/jenkins/minikube-integration/17857-697646/.minikube/certs/703036.pem (1338 bytes)
	W1226 22:22:36.295970  766058 certs.go:433] ignoring /home/jenkins/minikube-integration/17857-697646/.minikube/certs/home/jenkins/minikube-integration/17857-697646/.minikube/certs/703036_empty.pem, impossibly tiny 0 bytes
	I1226 22:22:36.295985  766058 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca-key.pem (1675 bytes)
	I1226 22:22:36.296018  766058 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem (1082 bytes)
	I1226 22:22:36.296048  766058 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/home/jenkins/minikube-integration/17857-697646/.minikube/certs/cert.pem (1123 bytes)
	I1226 22:22:36.296081  766058 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/home/jenkins/minikube-integration/17857-697646/.minikube/certs/key.pem (1679 bytes)
	I1226 22:22:36.296132  766058 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-697646/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17857-697646/.minikube/files/etc/ssl/certs/7030362.pem (1708 bytes)
	I1226 22:22:36.296168  766058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1226 22:22:36.296185  766058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/703036.pem -> /usr/share/ca-certificates/703036.pem
	I1226 22:22:36.296200  766058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/files/etc/ssl/certs/7030362.pem -> /usr/share/ca-certificates/7030362.pem
	I1226 22:22:36.296866  766058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/multinode-772557/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1226 22:22:36.326535  766058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/multinode-772557/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1226 22:22:36.355286  766058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/multinode-772557/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1226 22:22:36.383386  766058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/multinode-772557/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1226 22:22:36.411608  766058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1226 22:22:36.440104  766058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1226 22:22:36.468668  766058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1226 22:22:36.497582  766058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1226 22:22:36.527254  766058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1226 22:22:36.556342  766058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/certs/703036.pem --> /usr/share/ca-certificates/703036.pem (1338 bytes)
	I1226 22:22:36.586330  766058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/files/etc/ssl/certs/7030362.pem --> /usr/share/ca-certificates/7030362.pem (1708 bytes)
	I1226 22:22:36.615343  766058 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1226 22:22:36.637482  766058 ssh_runner.go:195] Run: openssl version
	I1226 22:22:36.644530  766058 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1226 22:22:36.644609  766058 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1226 22:22:36.656217  766058 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1226 22:22:36.660691  766058 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 26 21:45 /usr/share/ca-certificates/minikubeCA.pem
	I1226 22:22:36.660951  766058 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 26 21:45 /usr/share/ca-certificates/minikubeCA.pem
	I1226 22:22:36.661025  766058 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1226 22:22:36.669531  766058 command_runner.go:130] > b5213941
	I1226 22:22:36.669667  766058 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1226 22:22:36.681183  766058 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/703036.pem && ln -fs /usr/share/ca-certificates/703036.pem /etc/ssl/certs/703036.pem"
	I1226 22:22:36.692891  766058 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/703036.pem
	I1226 22:22:36.697549  766058 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 26 21:58 /usr/share/ca-certificates/703036.pem
	I1226 22:22:36.697856  766058 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 26 21:58 /usr/share/ca-certificates/703036.pem
	I1226 22:22:36.697915  766058 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/703036.pem
	I1226 22:22:36.706254  766058 command_runner.go:130] > 51391683
	I1226 22:22:36.706659  766058 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/703036.pem /etc/ssl/certs/51391683.0"
	I1226 22:22:36.718534  766058 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7030362.pem && ln -fs /usr/share/ca-certificates/7030362.pem /etc/ssl/certs/7030362.pem"
	I1226 22:22:36.730282  766058 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7030362.pem
	I1226 22:22:36.734975  766058 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 26 21:58 /usr/share/ca-certificates/7030362.pem
	I1226 22:22:36.735021  766058 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 26 21:58 /usr/share/ca-certificates/7030362.pem
	I1226 22:22:36.735076  766058 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7030362.pem
	I1226 22:22:36.743931  766058 command_runner.go:130] > 3ec20f2e
	I1226 22:22:36.744016  766058 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7030362.pem /etc/ssl/certs/3ec20f2e.0"
	I1226 22:22:36.755700  766058 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1226 22:22:36.760205  766058 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1226 22:22:36.760242  766058 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1226 22:22:36.760322  766058 kubeadm.go:404] StartCluster: {Name:multinode-772557 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-772557 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 22:22:36.760420  766058 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1226 22:22:36.760485  766058 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1226 22:22:36.808056  766058 cri.go:89] found id: ""
	I1226 22:22:36.808139  766058 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1226 22:22:36.818987  766058 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1226 22:22:36.819015  766058 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1226 22:22:36.819025  766058 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1226 22:22:36.819142  766058 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1226 22:22:36.829985  766058 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1226 22:22:36.830096  766058 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1226 22:22:36.841051  766058 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1226 22:22:36.841079  766058 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1226 22:22:36.841090  766058 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1226 22:22:36.841099  766058 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1226 22:22:36.841138  766058 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1226 22:22:36.841174  766058 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1226 22:22:36.943005  766058 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I1226 22:22:36.943040  766058 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I1226 22:22:37.040733  766058 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1226 22:22:37.040769  766058 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1226 22:22:53.439071  766058 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1226 22:22:53.439107  766058 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I1226 22:22:53.439147  766058 kubeadm.go:322] [preflight] Running pre-flight checks
	I1226 22:22:53.439153  766058 command_runner.go:130] > [preflight] Running pre-flight checks
	I1226 22:22:53.439234  766058 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1226 22:22:53.439239  766058 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1226 22:22:53.439290  766058 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-aws
	I1226 22:22:53.439295  766058 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1051-aws
	I1226 22:22:53.439327  766058 kubeadm.go:322] OS: Linux
	I1226 22:22:53.439331  766058 command_runner.go:130] > OS: Linux
	I1226 22:22:53.439373  766058 kubeadm.go:322] CGROUPS_CPU: enabled
	I1226 22:22:53.439378  766058 command_runner.go:130] > CGROUPS_CPU: enabled
	I1226 22:22:53.439422  766058 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1226 22:22:53.439427  766058 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1226 22:22:53.439471  766058 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1226 22:22:53.439476  766058 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1226 22:22:53.439520  766058 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1226 22:22:53.439533  766058 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1226 22:22:53.439579  766058 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1226 22:22:53.439584  766058 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1226 22:22:53.439629  766058 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1226 22:22:53.439634  766058 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1226 22:22:53.439675  766058 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1226 22:22:53.439680  766058 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1226 22:22:53.439724  766058 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1226 22:22:53.439728  766058 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1226 22:22:53.439770  766058 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1226 22:22:53.439775  766058 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1226 22:22:53.439842  766058 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1226 22:22:53.439847  766058 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1226 22:22:53.439934  766058 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1226 22:22:53.439938  766058 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1226 22:22:53.440026  766058 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1226 22:22:53.440034  766058 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1226 22:22:53.440092  766058 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1226 22:22:53.442417  766058 out.go:204]   - Generating certificates and keys ...
	I1226 22:22:53.440214  766058 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1226 22:22:53.442630  766058 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1226 22:22:53.442656  766058 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1226 22:22:53.442765  766058 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1226 22:22:53.442795  766058 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1226 22:22:53.442909  766058 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1226 22:22:53.442934  766058 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1226 22:22:53.443030  766058 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1226 22:22:53.443052  766058 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1226 22:22:53.443157  766058 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1226 22:22:53.443189  766058 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1226 22:22:53.443288  766058 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1226 22:22:53.443313  766058 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1226 22:22:53.443411  766058 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1226 22:22:53.443421  766058 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1226 22:22:53.443550  766058 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-772557] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1226 22:22:53.443556  766058 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-772557] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1226 22:22:53.443608  766058 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1226 22:22:53.443613  766058 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1226 22:22:53.443731  766058 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-772557] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1226 22:22:53.443736  766058 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-772557] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1226 22:22:53.443802  766058 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1226 22:22:53.443807  766058 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1226 22:22:53.443869  766058 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1226 22:22:53.443874  766058 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1226 22:22:53.443918  766058 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1226 22:22:53.443923  766058 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1226 22:22:53.443979  766058 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1226 22:22:53.443983  766058 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1226 22:22:53.444036  766058 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1226 22:22:53.444052  766058 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1226 22:22:53.444105  766058 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1226 22:22:53.444110  766058 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1226 22:22:53.444177  766058 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1226 22:22:53.444182  766058 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1226 22:22:53.444237  766058 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1226 22:22:53.444242  766058 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1226 22:22:53.444323  766058 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1226 22:22:53.444328  766058 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1226 22:22:53.444394  766058 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1226 22:22:53.447370  766058 out.go:204]   - Booting up control plane ...
	I1226 22:22:53.444685  766058 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1226 22:22:53.447481  766058 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1226 22:22:53.447491  766058 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1226 22:22:53.447569  766058 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1226 22:22:53.447574  766058 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1226 22:22:53.447635  766058 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1226 22:22:53.447646  766058 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1226 22:22:53.447762  766058 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1226 22:22:53.447768  766058 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1226 22:22:53.447874  766058 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1226 22:22:53.447895  766058 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1226 22:22:53.447934  766058 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1226 22:22:53.447939  766058 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1226 22:22:53.448124  766058 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1226 22:22:53.448131  766058 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1226 22:22:53.448213  766058 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.004238 seconds
	I1226 22:22:53.448229  766058 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.004238 seconds
	I1226 22:22:53.448337  766058 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1226 22:22:53.448346  766058 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1226 22:22:53.448466  766058 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1226 22:22:53.448475  766058 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1226 22:22:53.448596  766058 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1226 22:22:53.448607  766058 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1226 22:22:53.448777  766058 kubeadm.go:322] [mark-control-plane] Marking the node multinode-772557 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1226 22:22:53.448787  766058 command_runner.go:130] > [mark-control-plane] Marking the node multinode-772557 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1226 22:22:53.448839  766058 kubeadm.go:322] [bootstrap-token] Using token: rjdao4.4asc6t9973ya8suh
	I1226 22:22:53.452551  766058 out.go:204]   - Configuring RBAC rules ...
	I1226 22:22:53.448955  766058 command_runner.go:130] > [bootstrap-token] Using token: rjdao4.4asc6t9973ya8suh
	I1226 22:22:53.452669  766058 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1226 22:22:53.452687  766058 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1226 22:22:53.452772  766058 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1226 22:22:53.452781  766058 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1226 22:22:53.452912  766058 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1226 22:22:53.452920  766058 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1226 22:22:53.453039  766058 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1226 22:22:53.453047  766058 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1226 22:22:53.453170  766058 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1226 22:22:53.453179  766058 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1226 22:22:53.453268  766058 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1226 22:22:53.453276  766058 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1226 22:22:53.453382  766058 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1226 22:22:53.453391  766058 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1226 22:22:53.453433  766058 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1226 22:22:53.453442  766058 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1226 22:22:53.453485  766058 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1226 22:22:53.453493  766058 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1226 22:22:53.453503  766058 kubeadm.go:322] 
	I1226 22:22:53.453559  766058 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1226 22:22:53.453567  766058 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1226 22:22:53.453571  766058 kubeadm.go:322] 
	I1226 22:22:53.453643  766058 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1226 22:22:53.453651  766058 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1226 22:22:53.453656  766058 kubeadm.go:322] 
	I1226 22:22:53.453681  766058 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1226 22:22:53.453691  766058 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1226 22:22:53.453747  766058 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1226 22:22:53.453756  766058 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1226 22:22:53.453803  766058 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1226 22:22:53.453811  766058 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1226 22:22:53.453815  766058 kubeadm.go:322] 
	I1226 22:22:53.453866  766058 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1226 22:22:53.453874  766058 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1226 22:22:53.453879  766058 kubeadm.go:322] 
	I1226 22:22:53.453924  766058 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1226 22:22:53.453932  766058 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1226 22:22:53.453936  766058 kubeadm.go:322] 
	I1226 22:22:53.453985  766058 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1226 22:22:53.453994  766058 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1226 22:22:53.454064  766058 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1226 22:22:53.454072  766058 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1226 22:22:53.454136  766058 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1226 22:22:53.454144  766058 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1226 22:22:53.454151  766058 kubeadm.go:322] 
	I1226 22:22:53.454230  766058 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1226 22:22:53.454239  766058 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1226 22:22:53.454311  766058 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1226 22:22:53.454320  766058 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1226 22:22:53.454324  766058 kubeadm.go:322] 
	I1226 22:22:53.454403  766058 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token rjdao4.4asc6t9973ya8suh \
	I1226 22:22:53.454411  766058 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token rjdao4.4asc6t9973ya8suh \
	I1226 22:22:53.454508  766058 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:eb99bb3bfadffb0fd08fb657f91b723758be2a0ceabacd68fe612edc25108351 \
	I1226 22:22:53.454516  766058 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:eb99bb3bfadffb0fd08fb657f91b723758be2a0ceabacd68fe612edc25108351 \
	I1226 22:22:53.454536  766058 kubeadm.go:322] 	--control-plane 
	I1226 22:22:53.454544  766058 command_runner.go:130] > 	--control-plane 
	I1226 22:22:53.454548  766058 kubeadm.go:322] 
	I1226 22:22:53.454628  766058 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1226 22:22:53.454636  766058 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1226 22:22:53.454641  766058 kubeadm.go:322] 
	I1226 22:22:53.454717  766058 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token rjdao4.4asc6t9973ya8suh \
	I1226 22:22:53.454726  766058 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token rjdao4.4asc6t9973ya8suh \
	I1226 22:22:53.454823  766058 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:eb99bb3bfadffb0fd08fb657f91b723758be2a0ceabacd68fe612edc25108351 
	I1226 22:22:53.454832  766058 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:eb99bb3bfadffb0fd08fb657f91b723758be2a0ceabacd68fe612edc25108351 
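The --discovery-token-ca-cert-hash value in the join commands above can be recomputed at any time from the cluster CA certificate if the init output is lost. A sketch, assuming the certificateDir /var/lib/minikube/certs shown in the [certs] phase above:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'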
	I1226 22:22:53.454851  766058 cni.go:84] Creating CNI manager for ""
	I1226 22:22:53.454861  766058 cni.go:136] 1 nodes found, recommending kindnet
	I1226 22:22:53.457021  766058 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1226 22:22:53.458594  766058 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1226 22:22:53.478790  766058 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1226 22:22:53.478828  766058 command_runner.go:130] >   Size: 4030506   	Blocks: 7880       IO Block: 4096   regular file
	I1226 22:22:53.478836  766058 command_runner.go:130] > Device: 36h/54d	Inode: 1306506     Links: 1
	I1226 22:22:53.478844  766058 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1226 22:22:53.478851  766058 command_runner.go:130] > Access: 2023-12-04 16:39:54.000000000 +0000
	I1226 22:22:53.478857  766058 command_runner.go:130] > Modify: 2023-12-04 16:39:54.000000000 +0000
	I1226 22:22:53.478866  766058 command_runner.go:130] > Change: 2023-12-26 21:45:19.091346626 +0000
	I1226 22:22:53.478873  766058 command_runner.go:130] >  Birth: 2023-12-26 21:45:19.047347634 +0000
	I1226 22:22:53.479399  766058 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1226 22:22:53.479420  766058 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1226 22:22:53.522810  766058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1226 22:22:54.397435  766058 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1226 22:22:54.406510  766058 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1226 22:22:54.415642  766058 command_runner.go:130] > serviceaccount/kindnet created
	I1226 22:22:54.427618  766058 command_runner.go:130] > daemonset.apps/kindnet created
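The kindnet CNI objects (clusterrole, clusterrolebinding, serviceaccount, daemonset) are now applied. A hypothetical spot check of the rollout, reusing the bundled kubectl from the commands above:

    sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system rollout status daemonset/kindnet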
	I1226 22:22:54.433256  766058 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1226 22:22:54.433388  766058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:22:54.433490  766058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=393f165ced08f66e4386491f243850f87982a22b minikube.k8s.io/name=multinode-772557 minikube.k8s.io/updated_at=2023_12_26T22_22_54_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:22:54.572305  766058 command_runner.go:130] > node/multinode-772557 labeled
	I1226 22:22:54.575856  766058 command_runner.go:130] > -16
	I1226 22:22:54.575892  766058 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1226 22:22:54.575916  766058 ops.go:34] apiserver oom_adj: -16
	I1226 22:22:54.575992  766058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:22:54.687427  766058 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:22:55.077101  766058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:22:55.177880  766058 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:22:55.576143  766058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:22:55.664574  766058 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:22:56.077117  766058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:22:56.164924  766058 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:22:56.577037  766058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:22:56.669814  766058 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:22:57.076535  766058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:22:57.169568  766058 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:22:57.576177  766058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:22:57.685941  766058 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:22:58.076563  766058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:22:58.173454  766058 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:22:58.576874  766058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:22:58.681127  766058 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:22:59.076495  766058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:22:59.166821  766058 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:22:59.576186  766058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:22:59.674708  766058 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:23:00.076253  766058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:23:00.321066  766058 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:23:00.576669  766058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:23:00.666174  766058 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:23:01.076332  766058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:23:01.169020  766058 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:23:01.576682  766058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:23:01.662800  766058 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:23:02.076068  766058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:23:02.177778  766058 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:23:02.576409  766058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:23:02.670254  766058 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:23:03.076728  766058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:23:03.173655  766058 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:23:03.576354  766058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:23:03.688940  766058 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:23:04.076962  766058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:23:04.176074  766058 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:23:04.576655  766058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:23:04.672422  766058 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:23:05.077123  766058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:23:05.179211  766058 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:23:05.576757  766058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:23:05.673858  766058 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:23:06.076379  766058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:23:06.191879  766058 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:23:06.576695  766058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:23:06.683515  766058 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1226 22:23:07.076864  766058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:23:07.239351  766058 command_runner.go:130] > NAME      SECRETS   AGE
	I1226 22:23:07.239375  766058 command_runner.go:130] > default   0         1s
	I1226 22:23:07.242814  766058 kubeadm.go:1088] duration metric: took 12.809468047s to wait for elevateKubeSystemPrivileges.
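The run of NotFound errors above is a deliberate poll loop: minikube re-runs `kubectl get sa default` roughly every 500ms until the controller manager creates the default service account (about 13s here). A minimal shell sketch of the same wait, using the paths from the log:

    until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done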
	I1226 22:23:07.242841  766058 kubeadm.go:406] StartCluster complete in 30.482529853s
	I1226 22:23:07.242872  766058 settings.go:142] acquiring lock: {Name:mk1b89d623875ac96830001bdd0fc2b8d8c10aec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 22:23:07.242934  766058 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17857-697646/kubeconfig
	I1226 22:23:07.243671  766058 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17857-697646/kubeconfig: {Name:mk171fc32e21f516abb68bc5ebeb628b3c1d7f0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 22:23:07.244174  766058 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17857-697646/kubeconfig
	I1226 22:23:07.244758  766058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1226 22:23:07.245015  766058 config.go:182] Loaded profile config "multinode-772557": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1226 22:23:07.245161  766058 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I1226 22:23:07.245255  766058 addons.go:69] Setting storage-provisioner=true in profile "multinode-772557"
	I1226 22:23:07.245271  766058 addons.go:237] Setting addon storage-provisioner=true in "multinode-772557"
	I1226 22:23:07.245310  766058 host.go:66] Checking if "multinode-772557" exists ...
	I1226 22:23:07.245789  766058 cli_runner.go:164] Run: docker container inspect multinode-772557 --format={{.State.Status}}
	I1226 22:23:07.244464  766058 kapi.go:59] client config for multinode-772557: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17857-697646/.minikube/profiles/multinode-772557/client.crt", KeyFile:"/home/jenkins/minikube-integration/17857-697646/.minikube/profiles/multinode-772557/client.key", CAFile:"/home/jenkins/minikube-integration/17857-697646/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bfa00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1226 22:23:07.246792  766058 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1226 22:23:07.246813  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:07.246822  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:07.246830  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:07.247078  766058 cert_rotation.go:137] Starting client certificate rotation controller
	I1226 22:23:07.247511  766058 addons.go:69] Setting default-storageclass=true in profile "multinode-772557"
	I1226 22:23:07.247536  766058 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-772557"
	I1226 22:23:07.247850  766058 cli_runner.go:164] Run: docker container inspect multinode-772557 --format={{.State.Status}}
	I1226 22:23:07.299367  766058 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1226 22:23:07.301319  766058 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1226 22:23:07.301348  766058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1226 22:23:07.301450  766058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-772557
	I1226 22:23:07.304445  766058 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17857-697646/kubeconfig
	I1226 22:23:07.304752  766058 kapi.go:59] client config for multinode-772557: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17857-697646/.minikube/profiles/multinode-772557/client.crt", KeyFile:"/home/jenkins/minikube-integration/17857-697646/.minikube/profiles/multinode-772557/client.key", CAFile:"/home/jenkins/minikube-integration/17857-697646/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bfa00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1226 22:23:07.305055  766058 addons.go:237] Setting addon default-storageclass=true in "multinode-772557"
	I1226 22:23:07.305093  766058 host.go:66] Checking if "multinode-772557" exists ...
	I1226 22:23:07.305592  766058 cli_runner.go:164] Run: docker container inspect multinode-772557 --format={{.State.Status}}
	I1226 22:23:07.311688  766058 round_trippers.go:574] Response Status: 200 OK in 64 milliseconds
	I1226 22:23:07.311711  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:07.311720  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:07.311734  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:07.311744  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:07.311750  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:07.311756  766058 round_trippers.go:580]     Content-Length: 291
	I1226 22:23:07.311768  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:07 GMT
	I1226 22:23:07.311775  766058 round_trippers.go:580]     Audit-Id: 6f54709f-6866-4ab5-9948-decd876f8a23
	I1226 22:23:07.311805  766058 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7703adf0-ff18-499b-9077-c17b95400379","resourceVersion":"382","creationTimestamp":"2023-12-26T22:22:53Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1226 22:23:07.312246  766058 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7703adf0-ff18-499b-9077-c17b95400379","resourceVersion":"382","creationTimestamp":"2023-12-26T22:22:53Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1226 22:23:07.312313  766058 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1226 22:23:07.312319  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:07.312327  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:07.312334  766058 round_trippers.go:473]     Content-Type: application/json
	I1226 22:23:07.312340  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:07.369813  766058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33746 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/multinode-772557/id_rsa Username:docker}
	I1226 22:23:07.373535  766058 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I1226 22:23:07.373560  766058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1226 22:23:07.373625  766058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-772557
	I1226 22:23:07.408146  766058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33746 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/multinode-772557/id_rsa Username:docker}
	I1226 22:23:07.487358  766058 round_trippers.go:574] Response Status: 200 OK in 174 milliseconds
	I1226 22:23:07.487445  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:07.487474  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:07.487522  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:07.487556  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:07.487590  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:07.487618  766058 round_trippers.go:580]     Content-Length: 291
	I1226 22:23:07.487676  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:07 GMT
	I1226 22:23:07.487696  766058 round_trippers.go:580]     Audit-Id: 99392ca2-28a6-4bc4-800f-5ceda4505958
	I1226 22:23:07.487760  766058 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7703adf0-ff18-499b-9077-c17b95400379","resourceVersion":"388","creationTimestamp":"2023-12-26T22:22:53Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1226 22:23:07.516499  766058 command_runner.go:130] > apiVersion: v1
	I1226 22:23:07.516604  766058 command_runner.go:130] > data:
	I1226 22:23:07.516634  766058 command_runner.go:130] >   Corefile: |
	I1226 22:23:07.516660  766058 command_runner.go:130] >     .:53 {
	I1226 22:23:07.516680  766058 command_runner.go:130] >         errors
	I1226 22:23:07.516711  766058 command_runner.go:130] >         health {
	I1226 22:23:07.516729  766058 command_runner.go:130] >            lameduck 5s
	I1226 22:23:07.516754  766058 command_runner.go:130] >         }
	I1226 22:23:07.516785  766058 command_runner.go:130] >         ready
	I1226 22:23:07.516808  766058 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1226 22:23:07.516834  766058 command_runner.go:130] >            pods insecure
	I1226 22:23:07.516863  766058 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1226 22:23:07.516885  766058 command_runner.go:130] >            ttl 30
	I1226 22:23:07.516904  766058 command_runner.go:130] >         }
	I1226 22:23:07.516940  766058 command_runner.go:130] >         prometheus :9153
	I1226 22:23:07.516960  766058 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1226 22:23:07.516980  766058 command_runner.go:130] >            max_concurrent 1000
	I1226 22:23:07.517015  766058 command_runner.go:130] >         }
	I1226 22:23:07.517034  766058 command_runner.go:130] >         cache 30
	I1226 22:23:07.517055  766058 command_runner.go:130] >         loop
	I1226 22:23:07.517074  766058 command_runner.go:130] >         reload
	I1226 22:23:07.517103  766058 command_runner.go:130] >         loadbalance
	I1226 22:23:07.517122  766058 command_runner.go:130] >     }
	I1226 22:23:07.517143  766058 command_runner.go:130] > kind: ConfigMap
	I1226 22:23:07.517180  766058 command_runner.go:130] > metadata:
	I1226 22:23:07.517201  766058 command_runner.go:130] >   creationTimestamp: "2023-12-26T22:22:53Z"
	I1226 22:23:07.517223  766058 command_runner.go:130] >   name: coredns
	I1226 22:23:07.517254  766058 command_runner.go:130] >   namespace: kube-system
	I1226 22:23:07.517272  766058 command_runner.go:130] >   resourceVersion: "254"
	I1226 22:23:07.517292  766058 command_runner.go:130] >   uid: b6eb8822-881e-4f53-80e5-470c1b1d3126
	I1226 22:23:07.518667  766058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1226 22:23:07.583119  766058 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1226 22:23:07.589422  766058 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1226 22:23:07.747883  766058 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1226 22:23:07.747979  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:07.748015  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:07.748036  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:07.891624  766058 round_trippers.go:574] Response Status: 200 OK in 143 milliseconds
	I1226 22:23:07.891721  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:07.891749  766058 round_trippers.go:580]     Audit-Id: b212fd12-4019-453b-981d-61c5848d24c7
	I1226 22:23:07.891785  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:07.891824  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:07.891845  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:07.891891  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:07.891922  766058 round_trippers.go:580]     Content-Length: 291
	I1226 22:23:07.891961  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:07 GMT
	I1226 22:23:07.917535  766058 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7703adf0-ff18-499b-9077-c17b95400379","resourceVersion":"391","creationTimestamp":"2023-12-26T22:22:53Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1226 22:23:07.917770  766058 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-772557" context rescaled to 1 replicas
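The GET/PUT pair above goes through the Deployment's autoscaling/v1 Scale subresource to drop coredns from 2 replicas to 1. An equivalent done by hand, as a sketch:

    kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system \
      scale deployment coredns --replicas=1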
	I1226 22:23:07.917826  766058 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1226 22:23:07.920255  766058 out.go:177] * Verifying Kubernetes components...
	I1226 22:23:07.922604  766058 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1226 22:23:08.234362  766058 command_runner.go:130] > configmap/coredns replaced
	I1226 22:23:08.235946  766058 start.go:929] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
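Per the sed expression in the replace pipeline above, the Corefile printed earlier now carries a log directive ahead of errors and, ahead of the forward block, a hosts entry so that in-cluster lookups of host.minikube.internal resolve to 192.168.58.1:

    hosts {
       192.168.58.1 host.minikube.internal
       fallthrough
    }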
	I1226 22:23:08.378648  766058 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1226 22:23:08.385209  766058 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1226 22:23:08.395510  766058 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1226 22:23:08.404885  766058 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1226 22:23:08.415368  766058 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1226 22:23:08.428272  766058 command_runner.go:130] > pod/storage-provisioner created
	I1226 22:23:08.432865  766058 command_runner.go:130] > storageclass.storage.k8s.io/standard created
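The REST calls that follow re-read the StorageClass list and PUT the standard class back with its storageclass.kubernetes.io/is-default-class=true annotation intact. Setting that annotation manually would look like this (a sketch, not what minikube runs):

    kubectl patch storageclass standard \
      -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'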
	I1226 22:23:08.432982  766058 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I1226 22:23:08.432993  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:08.433002  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:08.433010  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:08.433417  766058 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17857-697646/kubeconfig
	I1226 22:23:08.433671  766058 kapi.go:59] client config for multinode-772557: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17857-697646/.minikube/profiles/multinode-772557/client.crt", KeyFile:"/home/jenkins/minikube-integration/17857-697646/.minikube/profiles/multinode-772557/client.key", CAFile:"/home/jenkins/minikube-integration/17857-697646/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bfa00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1226 22:23:08.433949  766058 node_ready.go:35] waiting up to 6m0s for node "multinode-772557" to be "Ready" ...
	I1226 22:23:08.434027  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:08.434033  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:08.434041  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:08.434049  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:08.442595  766058 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1226 22:23:08.442668  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:08.442692  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:08.442717  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:08.442757  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:08.442777  766058 round_trippers.go:580]     Content-Length: 1273
	I1226 22:23:08.442815  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:08 GMT
	I1226 22:23:08.442838  766058 round_trippers.go:580]     Audit-Id: 90e83ac3-5028-489b-98da-f99a24fb7aee
	I1226 22:23:08.442858  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:08.444794  766058 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"412"},"items":[{"metadata":{"name":"standard","uid":"9144d639-a6b1-497d-a90c-e6cec27f9529","resourceVersion":"402","creationTimestamp":"2023-12-26T22:23:08Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-26T22:23:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1226 22:23:08.445201  766058 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"9144d639-a6b1-497d-a90c-e6cec27f9529","resourceVersion":"402","creationTimestamp":"2023-12-26T22:23:08Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-26T22:23:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1226 22:23:08.445248  766058 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1226 22:23:08.445254  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:08.445262  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:08.445270  766058 round_trippers.go:473]     Content-Type: application/json
	I1226 22:23:08.445277  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:08.446859  766058 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1226 22:23:08.446875  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:08.446883  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:08.446890  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:08 GMT
	I1226 22:23:08.446897  766058 round_trippers.go:580]     Audit-Id: 7bc05c10-3e17-43f9-aad0-0b5252b58f3a
	I1226 22:23:08.446903  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:08.446909  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:08.446915  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:08.449205  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:08.452924  766058 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1226 22:23:08.452991  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:08.453015  766058 round_trippers.go:580]     Audit-Id: d73fa093-4951-406f-9aa5-840ff795da9f
	I1226 22:23:08.453040  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:08.453074  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:08.453099  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:08.453120  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:08.453154  766058 round_trippers.go:580]     Content-Length: 1220
	I1226 22:23:08.453176  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:08 GMT
	I1226 22:23:08.453425  766058 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"9144d639-a6b1-497d-a90c-e6cec27f9529","resourceVersion":"402","creationTimestamp":"2023-12-26T22:23:08Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-26T22:23:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1226 22:23:08.459020  766058 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1226 22:23:08.460728  766058 addons.go:508] enable addons completed in 1.215572865s: enabled=[storage-provisioner default-storageclass]
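For context, the PUT to /apis/storage.k8s.io/v1/storageclasses/standard above is the default-storageclass addon marking the "standard" StorageClass (provisioner k8s.io/minikube-hostpath) as the cluster default. A minimal client-go sketch of that GET-then-PUT annotation flip follows; it is illustrative only, and the kubeconfig handling and error handling are assumptions, not minikube's actual addons code:

	// Sketch only: flip the default-class annotation on the "standard"
	// StorageClass, mirroring the PUT logged above. Not minikube's code.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a client from the default kubeconfig (path is an assumption).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// GET the addon-managed StorageClass, set the default-class
		// annotation, and PUT it back; this mirrors the request body above.
		sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), "standard", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
		if _, err := cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
		fmt.Println("marked StorageClass \"standard\" as default")
	}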
	I1226 22:23:08.934230  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:08.934255  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:08.934265  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:08.934274  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:08.936953  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:08.937024  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:08.937046  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:08.937062  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:08 GMT
	I1226 22:23:08.937069  766058 round_trippers.go:580]     Audit-Id: 21a38b0d-a5e2-4d4a-a955-dc44f3cc9da8
	I1226 22:23:08.937089  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:08.937097  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:08.937104  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:08.937246  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:09.434425  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:09.434451  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:09.434461  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:09.434468  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:09.437237  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:09.437261  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:09.437270  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:09 GMT
	I1226 22:23:09.437277  766058 round_trippers.go:580]     Audit-Id: b2059a1b-b11b-45e9-9d80-04243c563b3c
	I1226 22:23:09.437286  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:09.437293  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:09.437299  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:09.437310  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:09.437799  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:09.934152  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:09.934182  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:09.934194  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:09.934201  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:09.936780  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:09.936859  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:09.936896  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:09 GMT
	I1226 22:23:09.936975  766058 round_trippers.go:580]     Audit-Id: 565976ee-b86c-4649-9beb-d31130d23332
	I1226 22:23:09.936992  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:09.936999  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:09.937006  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:09.937012  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:09.937157  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:10.434602  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:10.434628  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:10.434640  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:10.434647  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:10.437137  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:10.437164  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:10.437174  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:10.437181  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:10 GMT
	I1226 22:23:10.437187  766058 round_trippers.go:580]     Audit-Id: 30f90f29-0ebe-453f-b4c1-7d8fa06f6c0b
	I1226 22:23:10.437200  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:10.437215  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:10.437221  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:10.437565  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:10.437978  766058 node_ready.go:58] node "multinode-772557" has status "Ready":"False"
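The repeating GET /api/v1/nodes/multinode-772557 cycles here are a readiness poll: the request timestamps show minikube re-fetching the Node roughly every 500ms until its NodeReady condition turns True. A minimal client-go sketch of such a poll follows; the helper name, package, and error handling are assumptions for illustration, not the actual node_ready.go implementation:

	// Sketch only: poll a node's NodeReady condition every 500ms, as the
	// log above shows minikube doing. Names here are hypothetical.
	package nodeready

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitNodeReady re-fetches the named Node until its NodeReady condition
	// reports True, or the timeout expires.
	func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
		return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, nil // tolerate transient API errors and keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}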
	I1226 22:23:10.934753  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:10.934777  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:10.934787  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:10.934794  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:10.937326  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:10.937385  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:10.937399  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:10.937407  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:10 GMT
	I1226 22:23:10.937422  766058 round_trippers.go:580]     Audit-Id: 79a2f50f-1c43-4d8f-aa53-455f137efd58
	I1226 22:23:10.937429  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:10.937439  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:10.937448  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:10.937588  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:11.435010  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:11.435038  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:11.435049  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:11.435057  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:11.437840  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:11.437864  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:11.437873  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:11.437879  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:11.437885  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:11 GMT
	I1226 22:23:11.437891  766058 round_trippers.go:580]     Audit-Id: 404d1c36-e05f-4358-ace0-07f4a9bc7495
	I1226 22:23:11.437898  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:11.437905  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:11.438027  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:11.934114  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:11.934137  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:11.934147  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:11.934155  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:11.936655  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:11.936681  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:11.936690  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:11.936696  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:11 GMT
	I1226 22:23:11.936702  766058 round_trippers.go:580]     Audit-Id: 072418fa-983d-42a6-96c6-1c07457eb87a
	I1226 22:23:11.936708  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:11.936720  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:11.936730  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:11.937150  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:12.434172  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:12.434195  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:12.434206  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:12.434213  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:12.436816  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:12.436840  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:12.436849  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:12 GMT
	I1226 22:23:12.436856  766058 round_trippers.go:580]     Audit-Id: 41d7b8b6-e812-494b-8ad7-3704276ada63
	I1226 22:23:12.436861  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:12.436868  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:12.436874  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:12.436885  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:12.437005  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:12.934455  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:12.934479  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:12.934490  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:12.934497  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:12.937229  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:12.937256  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:12.937265  766058 round_trippers.go:580]     Audit-Id: 1cebd5b7-3748-4c3e-8971-7b22b749156c
	I1226 22:23:12.937272  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:12.937279  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:12.937285  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:12.937291  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:12.937298  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:12 GMT
	I1226 22:23:12.937431  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:12.937832  766058 node_ready.go:58] node "multinode-772557" has status "Ready":"False"
	I1226 22:23:13.434227  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:13.434259  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:13.434269  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:13.434276  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:13.436946  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:13.436968  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:13.436981  766058 round_trippers.go:580]     Audit-Id: 37a9b7fb-983b-4885-b95d-ce8bc6bc26ec
	I1226 22:23:13.436988  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:13.436995  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:13.437001  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:13.437013  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:13.437023  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:13 GMT
	I1226 22:23:13.437155  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:13.934293  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:13.934315  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:13.934325  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:13.934332  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:13.936990  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:13.937019  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:13.937028  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:13.937035  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:13.937041  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:13 GMT
	I1226 22:23:13.937048  766058 round_trippers.go:580]     Audit-Id: e9898266-d81b-49f9-93ad-a9091f71b7e0
	I1226 22:23:13.937054  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:13.937063  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:13.937198  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:14.434221  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:14.434249  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:14.434259  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:14.434269  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:14.436711  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:14.436731  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:14.436739  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:14.436745  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:14.436752  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:14.436764  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:14.436771  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:14 GMT
	I1226 22:23:14.436777  766058 round_trippers.go:580]     Audit-Id: 6f10303c-15bb-4f93-ad87-55e5421fb701
	I1226 22:23:14.437223  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:14.934906  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:14.934930  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:14.934940  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:14.934947  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:14.937383  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:14.937407  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:14.937415  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:14.937421  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:14.937428  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:14 GMT
	I1226 22:23:14.937434  766058 round_trippers.go:580]     Audit-Id: 2fd63c2c-5096-4cdc-97fd-59b214140b45
	I1226 22:23:14.937448  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:14.937455  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:14.937659  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:14.938103  766058 node_ready.go:58] node "multinode-772557" has status "Ready":"False"
	I1226 22:23:15.434900  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:15.434924  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:15.434934  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:15.434942  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:15.437451  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:15.437477  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:15.437486  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:15.437493  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:15 GMT
	I1226 22:23:15.437499  766058 round_trippers.go:580]     Audit-Id: c1452d4e-5a4c-4b6c-a2b7-77cdae3a8cbd
	I1226 22:23:15.437505  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:15.437512  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:15.437522  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:15.437927  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:15.934206  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:15.934234  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:15.934244  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:15.934251  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:15.936877  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:15.936898  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:15.936907  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:15.936913  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:15.936920  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:15.936926  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:15 GMT
	I1226 22:23:15.936933  766058 round_trippers.go:580]     Audit-Id: a8f5bbb2-ef6d-43bb-91e3-6494ba12b8a0
	I1226 22:23:15.936939  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:15.937297  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:16.434333  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:16.434358  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:16.434369  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:16.434376  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:16.437012  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:16.437038  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:16.437047  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:16.437054  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:16.437061  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:16 GMT
	I1226 22:23:16.437068  766058 round_trippers.go:580]     Audit-Id: 4cf5fc0a-cc50-4cd7-8901-839f84066ddb
	I1226 22:23:16.437075  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:16.437087  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:16.437329  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:16.934996  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:16.935020  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:16.935030  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:16.935038  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:16.938620  766058 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 22:23:16.938646  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:16.938655  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:16.938662  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:16.938668  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:16.938674  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:16 GMT
	I1226 22:23:16.938680  766058 round_trippers.go:580]     Audit-Id: a1f5f235-3ae3-4959-8074-9cdd42b607d4
	I1226 22:23:16.938686  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:16.938836  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:16.939252  766058 node_ready.go:58] node "multinode-772557" has status "Ready":"False"
	I1226 22:23:17.434235  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:17.434263  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:17.434273  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:17.434280  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:17.436856  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:17.436876  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:17.436884  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:17.436890  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:17.436897  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:17.436903  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:17 GMT
	I1226 22:23:17.436910  766058 round_trippers.go:580]     Audit-Id: be949873-48fe-4870-b789-0b7f88a41df6
	I1226 22:23:17.436916  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:17.437066  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:17.934384  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:17.934428  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:17.934439  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:17.934446  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:17.936908  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:17.936930  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:17.936938  766058 round_trippers.go:580]     Audit-Id: db0fd15b-6bfd-48bd-9b9c-497eec57abb4
	I1226 22:23:17.936945  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:17.936952  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:17.936959  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:17.936966  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:17.936972  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:17 GMT
	I1226 22:23:17.937168  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:18.434676  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:18.434700  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:18.434710  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:18.434717  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:18.437382  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:18.437403  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:18.437411  766058 round_trippers.go:580]     Audit-Id: 21990dd9-9cd4-46a9-9004-ca996ba4a957
	I1226 22:23:18.437418  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:18.437424  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:18.437430  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:18.437436  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:18.437442  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:18 GMT
	I1226 22:23:18.437606  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:18.935006  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:18.935031  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:18.935042  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:18.935050  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:18.937514  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:18.937535  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:18.937543  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:18.937549  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:18 GMT
	I1226 22:23:18.937555  766058 round_trippers.go:580]     Audit-Id: 3df624a8-8cd2-42c9-ac39-4af9acbeee0c
	I1226 22:23:18.937562  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:18.937568  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:18.937575  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:18.937706  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:19.434610  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:19.434638  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:19.434648  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:19.434656  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:19.437099  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:19.437133  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:19.437143  766058 round_trippers.go:580]     Audit-Id: 63f88654-21a2-4c51-ac8f-bd806ac8d04c
	I1226 22:23:19.437150  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:19.437156  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:19.437163  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:19.437169  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:19.437183  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:19 GMT
	I1226 22:23:19.437552  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:19.437958  766058 node_ready.go:58] node "multinode-772557" has status "Ready":"False"
	I1226 22:23:19.935012  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:19.935036  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:19.935046  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:19.935054  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:19.937584  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:19.937610  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:19.937619  766058 round_trippers.go:580]     Audit-Id: 2bf65e0c-4b19-41a8-8a48-24f45f0ecf9f
	I1226 22:23:19.937625  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:19.937631  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:19.937638  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:19.937645  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:19.937651  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:19 GMT
	I1226 22:23:19.938194  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:20.434183  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:20.434207  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:20.434217  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:20.434225  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:20.436641  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:20.436662  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:20.436670  766058 round_trippers.go:580]     Audit-Id: cab2cf63-0c5c-467d-89ea-29e930961d40
	I1226 22:23:20.436677  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:20.436683  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:20.436689  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:20.436695  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:20.436701  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:20 GMT
	I1226 22:23:20.436871  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:20.934980  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:20.935004  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:20.935015  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:20.935022  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:20.937498  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:20.937519  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:20.937528  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:20.937535  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:20.937541  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:20 GMT
	I1226 22:23:20.937547  766058 round_trippers.go:580]     Audit-Id: 8d5a5e06-d92a-4e2e-834f-0c34be4cddaf
	I1226 22:23:20.937553  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:20.937560  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:20.937696  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:21.434848  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:21.434874  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:21.434885  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:21.434892  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:21.437452  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:21.437479  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:21.437489  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:21 GMT
	I1226 22:23:21.437495  766058 round_trippers.go:580]     Audit-Id: 2fa14bc7-94ba-452b-a078-4d2946f242aa
	I1226 22:23:21.437502  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:21.437508  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:21.437514  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:21.437520  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:21.437646  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:21.438038  766058 node_ready.go:58] node "multinode-772557" has status "Ready":"False"
	I1226 22:23:21.934811  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:21.934837  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:21.934847  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:21.934854  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:21.937439  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:21.937462  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:21.937477  766058 round_trippers.go:580]     Audit-Id: 87c3b518-148f-4293-9765-4db889bc0b9d
	I1226 22:23:21.937485  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:21.937491  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:21.937498  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:21.937508  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:21.937518  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:21 GMT
	I1226 22:23:21.937798  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:22.434232  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:22.434262  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:22.434273  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:22.434283  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:22.437001  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:22.437026  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:22.437035  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:22.437041  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:22 GMT
	I1226 22:23:22.437048  766058 round_trippers.go:580]     Audit-Id: 50f18029-1b25-421f-88a7-d072dae22823
	I1226 22:23:22.437054  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:22.437060  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:22.437066  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:22.437151  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:22.934195  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:22.934218  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:22.934228  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:22.934235  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:22.936659  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:22.936680  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:22.936689  766058 round_trippers.go:580]     Audit-Id: 9e4469ca-bdd7-4bfa-a3c7-1e748a031298
	I1226 22:23:22.936695  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:22.936701  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:22.936708  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:22.936714  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:22.936722  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:22 GMT
	I1226 22:23:22.936869  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:23.434117  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:23.434143  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:23.434153  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:23.434161  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:23.436676  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:23.436705  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:23.436714  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:23.436721  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:23 GMT
	I1226 22:23:23.436727  766058 round_trippers.go:580]     Audit-Id: 3a9aefc1-80a3-4138-8408-df9f2180209a
	I1226 22:23:23.436734  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:23.436740  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:23.436750  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:23.436857  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:23.934987  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:23.935021  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:23.935031  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:23.935038  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:23.937656  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:23.937677  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:23.937685  766058 round_trippers.go:580]     Audit-Id: a3c185ed-e84c-45ff-ae22-84e92df46eda
	I1226 22:23:23.937691  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:23.937698  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:23.937704  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:23.937710  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:23.937719  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:23 GMT
	I1226 22:23:23.937905  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:23.938294  766058 node_ready.go:58] node "multinode-772557" has status "Ready":"False"
	I1226 22:23:24.435114  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:24.435141  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:24.435154  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:24.435161  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:24.437713  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:24.437740  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:24.437749  766058 round_trippers.go:580]     Audit-Id: 1c732c54-6d93-4de2-9ff0-65aad857949c
	I1226 22:23:24.437756  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:24.437762  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:24.437768  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:24.437774  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:24.437781  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:24 GMT
	I1226 22:23:24.437874  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:24.935020  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:24.935072  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:24.935082  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:24.935089  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:24.937559  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:24.937587  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:24.937596  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:24.937603  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:24 GMT
	I1226 22:23:24.937610  766058 round_trippers.go:580]     Audit-Id: e10c5f43-905d-496a-9fc1-da6c7acbf374
	I1226 22:23:24.937616  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:24.937623  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:24.937642  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:24.937762  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:25.434913  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:25.434937  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:25.434947  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:25.434954  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:25.437466  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:25.437490  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:25.437499  766058 round_trippers.go:580]     Audit-Id: f2d054c1-69b7-4060-b317-dfbea26c7b3a
	I1226 22:23:25.437505  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:25.437511  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:25.437518  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:25.437525  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:25.437531  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:25 GMT
	I1226 22:23:25.437645  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:25.934826  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:25.934849  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:25.934859  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:25.934866  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:25.937263  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:25.937290  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:25.937299  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:25 GMT
	I1226 22:23:25.937306  766058 round_trippers.go:580]     Audit-Id: 03c59062-b73e-436f-89a6-4ad48c063a9f
	I1226 22:23:25.937312  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:25.937319  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:25.937331  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:25.937337  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:25.937495  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:26.434605  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:26.434631  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:26.434641  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:26.434648  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:26.437249  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:26.437276  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:26.437285  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:26.437291  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:26.437297  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:26.437304  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:26.437311  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:26 GMT
	I1226 22:23:26.437322  766058 round_trippers.go:580]     Audit-Id: 35abe401-1546-47a9-b896-4444fb927b08
	I1226 22:23:26.437509  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:26.437915  766058 node_ready.go:58] node "multinode-772557" has status "Ready":"False"
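
	The entries above are minikube's node-readiness wait loop: node_ready.go issues a GET against /api/v1/nodes/multinode-772557 roughly every 500 ms and re-checks the node's Ready condition until it flips to True or the wait times out. A minimal client-go sketch of the same polling pattern follows; it is illustrative only, and the kubeconfig location, node name, and 6-minute deadline are assumptions, not minikube's actual implementation.

	// readiness_poll.go - minimal sketch of a node-readiness poll, assuming a
	// reachable cluster in the default kubeconfig and a node named "multinode-772557".
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeIsReady reports whether the node's Ready condition is True.
	func nodeIsReady(node *corev1.Node) bool {
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute) // assumed timeout
		for time.Now().Before(deadline) {
			node, err := client.CoreV1().Nodes().Get(context.TODO(), "multinode-772557", metav1.GetOptions{})
			if err == nil && nodeIsReady(node) {
				fmt.Println("node is Ready")
				return
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500 ms cadence in the log
		}
		fmt.Println("timed out waiting for node to become Ready")
	}

	Polling with a short fixed interval keeps the loop simple and matches what the log shows; a longer-lived client would typically use a watch rather than repeated GETs.
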
	I1226 22:23:26.934588  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:26.934612  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:26.934623  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:26.934631  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:26.937215  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:26.937237  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:26.937245  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:26.937252  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:26.937258  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:26.937265  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:26.937271  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:26 GMT
	I1226 22:23:26.937278  766058 round_trippers.go:580]     Audit-Id: 067039fa-c955-4b3d-bbee-16e87adfdff1
	I1226 22:23:26.937457  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:27.434697  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:27.434723  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:27.434733  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:27.434740  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:27.437230  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:27.437258  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:27.437266  766058 round_trippers.go:580]     Audit-Id: 0b7426ba-07b6-4d32-82af-ef7c131e9e56
	I1226 22:23:27.437273  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:27.437279  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:27.437285  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:27.437293  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:27.437300  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:27 GMT
	I1226 22:23:27.437406  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:27.934189  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:27.934214  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:27.934224  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:27.934232  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:27.936631  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:27.936654  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:27.936662  766058 round_trippers.go:580]     Audit-Id: c95df71f-1778-4976-b722-4b02e914ec1c
	I1226 22:23:27.936668  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:27.936676  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:27.936682  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:27.936689  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:27.936695  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:27 GMT
	I1226 22:23:27.936847  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:28.435002  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:28.435028  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:28.435039  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:28.435046  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:28.437714  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:28.437738  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:28.437747  766058 round_trippers.go:580]     Audit-Id: 14cd2563-2536-4e9a-b91c-2f469bbee8c9
	I1226 22:23:28.437756  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:28.437762  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:28.437769  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:28.437776  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:28.437782  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:28 GMT
	I1226 22:23:28.437892  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:28.438289  766058 node_ready.go:58] node "multinode-772557" has status "Ready":"False"
	I1226 22:23:28.934967  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:28.934994  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:28.935003  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:28.935011  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:28.937519  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:28.937544  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:28.937552  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:28 GMT
	I1226 22:23:28.937558  766058 round_trippers.go:580]     Audit-Id: 5bc66827-7564-4117-ada7-0794d9f2e8d4
	I1226 22:23:28.937565  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:28.937571  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:28.937577  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:28.937584  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:28.937981  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:29.434185  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:29.434211  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:29.434222  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:29.434229  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:29.436774  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:29.436795  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:29.436803  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:29.436809  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:29.436816  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:29.436822  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:29 GMT
	I1226 22:23:29.436828  766058 round_trippers.go:580]     Audit-Id: 0f9b9097-78b4-4199-9df1-0e7900178c07
	I1226 22:23:29.436834  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:29.436994  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:29.934999  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:29.935024  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:29.935035  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:29.935043  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:29.937761  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:29.937799  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:29.937812  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:29.937819  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:29.937825  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:29.937831  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:29.937841  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:29 GMT
	I1226 22:23:29.937847  766058 round_trippers.go:580]     Audit-Id: 3c6799f9-fd0c-4cf3-a9fb-b9b61c2152d6
	I1226 22:23:29.938098  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:30.434788  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:30.434814  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:30.434824  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:30.434832  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:30.437421  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:30.437442  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:30.437451  766058 round_trippers.go:580]     Audit-Id: 2255dac7-f00e-4fb1-81bc-0cf4544eccdd
	I1226 22:23:30.437457  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:30.437463  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:30.437470  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:30.437476  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:30.437483  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:30 GMT
	I1226 22:23:30.437592  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:30.934733  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:30.934755  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:30.934765  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:30.934772  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:30.937199  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:30.937221  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:30.937229  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:30.937236  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:30.937243  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:30.937249  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:30.937256  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:30 GMT
	I1226 22:23:30.937262  766058 round_trippers.go:580]     Audit-Id: 409cdd0d-7150-47bd-a728-52d942d0f5df
	I1226 22:23:30.937399  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:30.937781  766058 node_ready.go:58] node "multinode-772557" has status "Ready":"False"
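
	The same wait can be reproduced from the command line with kubectl's built-in condition wait; the context name and timeout below are assumptions based on the profile name:

	kubectl --context multinode-772557 wait --for=condition=Ready node/multinode-772557 --timeout=6m
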
	I1226 22:23:31.434197  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:31.434222  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:31.434232  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:31.434239  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:31.437392  766058 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 22:23:31.437422  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:31.437431  766058 round_trippers.go:580]     Audit-Id: 91556a43-72c0-4578-a27d-7a54555f87da
	I1226 22:23:31.437438  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:31.437445  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:31.437451  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:31.437458  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:31.437464  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:31 GMT
	I1226 22:23:31.437888  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:31.934229  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:31.934262  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:31.934275  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:31.934282  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:31.937217  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:31.937242  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:31.937251  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:31.937261  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:31.937267  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:31.937273  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:31 GMT
	I1226 22:23:31.937280  766058 round_trippers.go:580]     Audit-Id: b0616c79-4659-4be4-af04-85b816b5361c
	I1226 22:23:31.937287  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:31.937412  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:32.434853  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:32.434882  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:32.434892  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:32.434899  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:32.437370  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:32.437395  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:32.437403  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:32.437410  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:32.437417  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:32.437423  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:32.437430  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:32 GMT
	I1226 22:23:32.437436  766058 round_trippers.go:580]     Audit-Id: 52b7c4a2-76c0-4976-890a-1a2c8b96f045
	I1226 22:23:32.437722  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:32.934312  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:32.934337  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:32.934348  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:32.934355  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:32.936821  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:32.936843  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:32.936852  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:32.936858  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:32.936866  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:32.936872  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:32.936879  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:32 GMT
	I1226 22:23:32.936890  766058 round_trippers.go:580]     Audit-Id: 6e4ac8e5-9592-4087-9cd2-0d7e48299d2f
	I1226 22:23:32.937104  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:33.434809  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:33.434835  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:33.434846  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:33.434853  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:33.437465  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:33.437484  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:33.437492  766058 round_trippers.go:580]     Audit-Id: 02160534-6c30-4a6c-b588-bd114b4c8bc4
	I1226 22:23:33.437499  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:33.437505  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:33.437511  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:33.437517  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:33.437523  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:33 GMT
	I1226 22:23:33.437634  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:33.438011  766058 node_ready.go:58] node "multinode-772557" has status "Ready":"False"
	I1226 22:23:33.934860  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:33.934882  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:33.934896  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:33.934904  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:33.937653  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:33.937680  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:33.937689  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:33.937697  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:33 GMT
	I1226 22:23:33.937703  766058 round_trippers.go:580]     Audit-Id: de6a15f9-3435-4125-b14d-f51bd5663b21
	I1226 22:23:33.937709  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:33.937716  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:33.937723  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:33.937857  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:34.435030  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:34.435050  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:34.435061  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:34.435074  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:34.437749  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:34.437776  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:34.437786  766058 round_trippers.go:580]     Audit-Id: 36279ac6-1edf-44f7-a313-9782db366e90
	I1226 22:23:34.437793  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:34.437799  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:34.437805  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:34.437811  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:34.437818  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:34 GMT
	I1226 22:23:34.438078  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:34.934846  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:34.934876  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:34.934887  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:34.934894  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:34.937587  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:34.937670  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:34.937694  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:34.937717  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:34.937776  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:34.937803  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:34.937824  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:34 GMT
	I1226 22:23:34.937839  766058 round_trippers.go:580]     Audit-Id: ead53387-54f8-4347-a946-93b9f456f075
	I1226 22:23:34.937958  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:35.434581  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:35.434608  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:35.434618  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:35.434626  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:35.437102  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:35.437127  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:35.437136  766058 round_trippers.go:580]     Audit-Id: 5262a0ae-bc11-4433-a76b-b4d028b0750f
	I1226 22:23:35.437142  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:35.437151  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:35.437158  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:35.437164  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:35.437173  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:35 GMT
	I1226 22:23:35.437264  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:35.935041  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:35.935069  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:35.935080  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:35.935088  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:35.937522  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:35.937544  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:35.937553  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:35.937560  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:35.937566  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:35.937573  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:35 GMT
	I1226 22:23:35.937579  766058 round_trippers.go:580]     Audit-Id: 105a4873-3217-4c82-80e7-38e1b93d09e4
	I1226 22:23:35.937585  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:35.937731  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:35.938118  766058 node_ready.go:58] node "multinode-772557" has status "Ready":"False"
	I1226 22:23:36.434934  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:36.434956  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:36.434966  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:36.434973  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:36.437781  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:36.437808  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:36.437817  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:36 GMT
	I1226 22:23:36.437823  766058 round_trippers.go:580]     Audit-Id: b0e1e592-6a48-4661-b379-e64116d3ff0a
	I1226 22:23:36.437830  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:36.437845  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:36.437852  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:36.437863  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:36.437991  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:36.934972  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:36.935014  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:36.935025  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:36.935033  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:36.937580  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:36.937601  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:36.937610  766058 round_trippers.go:580]     Audit-Id: 59e75af2-b8db-4e9f-a896-96deb63713af
	I1226 22:23:36.937617  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:36.937623  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:36.937629  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:36.937635  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:36.937641  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:36 GMT
	I1226 22:23:36.937766  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:37.434941  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:37.434968  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:37.434979  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:37.434987  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:37.437460  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:37.437485  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:37.437494  766058 round_trippers.go:580]     Audit-Id: 750d7b40-c357-461f-b392-cdf029c7cd2e
	I1226 22:23:37.437500  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:37.437508  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:37.437516  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:37.437522  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:37.437532  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:37 GMT
	I1226 22:23:37.437644  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:37.934755  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:37.934781  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:37.934795  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:37.934803  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:37.937396  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:37.937428  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:37.937437  766058 round_trippers.go:580]     Audit-Id: 1ec3c66b-5eba-4b5e-9507-b2c2458c4cdb
	I1226 22:23:37.937469  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:37.937477  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:37.937487  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:37.937493  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:37.937501  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:37 GMT
	I1226 22:23:37.937949  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"361","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1226 22:23:37.938352  766058 node_ready.go:58] node "multinode-772557" has status "Ready":"False"
	I1226 22:23:38.434216  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:38.434240  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:38.434250  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:38.434260  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:38.436812  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:38.436835  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:38.436843  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:38.436851  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:38.436857  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:38 GMT
	I1226 22:23:38.436863  766058 round_trippers.go:580]     Audit-Id: cc8cd6cd-1f51-4a94-a64f-91404d01ea0f
	I1226 22:23:38.436869  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:38.436876  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:38.437028  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"426","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1226 22:23:38.437425  766058 node_ready.go:49] node "multinode-772557" has status "Ready":"True"
	I1226 22:23:38.437439  766058 node_ready.go:38] duration metric: took 30.003469154s waiting for node "multinode-772557" to be "Ready" ...
	I1226 22:23:38.437449  766058 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
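Up to this point the log shows minikube re-fetching the Node object roughly every 500ms until its Ready condition flips to True (about 30s here). A minimal client-go sketch of an equivalent wait loop — assumed code for illustration, not minikube's node_ready.go, and the kubeconfig path is hypothetical:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the named node every 500ms until its Ready
	// condition is True or the context expires.
	func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
		for {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(500 * time.Millisecond):
			}
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // hypothetical path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		if err := waitNodeReady(ctx, cs, "multinode-772557"); err != nil {
			panic(err)
		}
		fmt.Println(`node "multinode-772557" has status "Ready":"True"`)
	}
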
	I1226 22:23:38.437561  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1226 22:23:38.437573  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:38.437582  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:38.437588  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:38.441427  766058 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 22:23:38.441450  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:38.441459  766058 round_trippers.go:580]     Audit-Id: 91c3fe74-5d5b-4a56-b42b-c8c5dd2ca35d
	I1226 22:23:38.441466  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:38.441472  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:38.441478  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:38.441485  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:38.441491  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:38 GMT
	I1226 22:23:38.441881  766058 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"432"},"items":[{"metadata":{"name":"coredns-5dd5756b68-k29sm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"931cdf23-56fe-45a4-afb5-7d30cf6c7d97","resourceVersion":"430","creationTimestamp":"2023-12-26T22:23:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"86d64134-44b9-4f35-8c5d-6492f5e0552e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"86d64134-44b9-4f35-8c5d-6492f5e0552e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55534 chars]
	I1226 22:23:38.446111  766058 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-k29sm" in "kube-system" namespace to be "Ready" ...
	I1226 22:23:38.446230  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-k29sm
	I1226 22:23:38.446242  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:38.446253  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:38.446266  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:38.449195  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:38.449219  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:38.449228  766058 round_trippers.go:580]     Audit-Id: 207b9cf5-93f7-4dfd-8b27-c4797209a034
	I1226 22:23:38.449235  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:38.449242  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:38.449249  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:38.449255  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:38.449264  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:38 GMT
	I1226 22:23:38.449521  766058 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-k29sm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"931cdf23-56fe-45a4-afb5-7d30cf6c7d97","resourceVersion":"430","creationTimestamp":"2023-12-26T22:23:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"86d64134-44b9-4f35-8c5d-6492f5e0552e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"86d64134-44b9-4f35-8c5d-6492f5e0552e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1226 22:23:38.450077  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:38.450097  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:38.450106  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:38.450114  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:38.452596  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:38.452619  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:38.452628  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:38.452635  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:38.452641  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:38.452648  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:38 GMT
	I1226 22:23:38.452658  766058 round_trippers.go:580]     Audit-Id: 4aeeed0e-7266-411e-8054-3f87a56bf283
	I1226 22:23:38.452664  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:38.452886  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"426","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1226 22:23:38.946401  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-k29sm
	I1226 22:23:38.946463  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:38.946496  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:38.946517  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:38.955496  766058 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1226 22:23:38.955517  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:38.955526  766058 round_trippers.go:580]     Audit-Id: dc54dafa-b38c-4b71-a9c5-16113e285aa1
	I1226 22:23:38.955532  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:38.955539  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:38.955589  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:38.955600  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:38.955606  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:38 GMT
	I1226 22:23:38.956191  766058 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-k29sm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"931cdf23-56fe-45a4-afb5-7d30cf6c7d97","resourceVersion":"430","creationTimestamp":"2023-12-26T22:23:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"86d64134-44b9-4f35-8c5d-6492f5e0552e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"86d64134-44b9-4f35-8c5d-6492f5e0552e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1226 22:23:38.956832  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:38.956866  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:38.956904  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:38.956931  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:38.964319  766058 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1226 22:23:38.964388  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:38.964412  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:38.964438  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:38.964472  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:38 GMT
	I1226 22:23:38.964499  766058 round_trippers.go:580]     Audit-Id: f01622eb-3ab5-4afa-944e-ce5e26480235
	I1226 22:23:38.964559  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:38.964584  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:38.964801  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"426","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1226 22:23:39.447019  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-k29sm
	I1226 22:23:39.447044  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:39.447054  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:39.447066  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:39.449640  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:39.449703  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:39.449726  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:39.449751  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:39 GMT
	I1226 22:23:39.449786  766058 round_trippers.go:580]     Audit-Id: a53d0f37-669a-46fb-924a-0ca19297c9ea
	I1226 22:23:39.449812  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:39.449829  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:39.449836  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:39.449978  766058 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-k29sm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"931cdf23-56fe-45a4-afb5-7d30cf6c7d97","resourceVersion":"430","creationTimestamp":"2023-12-26T22:23:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"86d64134-44b9-4f35-8c5d-6492f5e0552e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"86d64134-44b9-4f35-8c5d-6492f5e0552e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1226 22:23:39.450497  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:39.450514  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:39.450523  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:39.450530  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:39.452843  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:39.452860  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:39.452868  766058 round_trippers.go:580]     Audit-Id: 39165687-19d6-4648-aaa6-d9b738293653
	I1226 22:23:39.452874  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:39.452880  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:39.452887  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:39.452893  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:39.452899  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:39 GMT
	I1226 22:23:39.453061  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"426","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1226 22:23:39.946600  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-k29sm
	I1226 22:23:39.946626  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:39.946635  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:39.946643  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:39.949518  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:39.949545  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:39.949554  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:39.949564  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:39.949571  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:39 GMT
	I1226 22:23:39.949577  766058 round_trippers.go:580]     Audit-Id: 00f95593-6845-4b99-ab84-5df6fe3ea1f5
	I1226 22:23:39.949583  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:39.949590  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:39.949732  766058 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-k29sm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"931cdf23-56fe-45a4-afb5-7d30cf6c7d97","resourceVersion":"442","creationTimestamp":"2023-12-26T22:23:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"86d64134-44b9-4f35-8c5d-6492f5e0552e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"86d64134-44b9-4f35-8c5d-6492f5e0552e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1226 22:23:39.950313  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:39.950330  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:39.950339  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:39.950346  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:39.952724  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:39.952750  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:39.952758  766058 round_trippers.go:580]     Audit-Id: 62d588b0-0681-4134-a04b-820914cec02b
	I1226 22:23:39.952765  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:39.952771  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:39.952777  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:39.952784  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:39.952800  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:39 GMT
	I1226 22:23:39.953148  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"426","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1226 22:23:39.953538  766058 pod_ready.go:92] pod "coredns-5dd5756b68-k29sm" in "kube-system" namespace has status "Ready":"True"
	I1226 22:23:39.953555  766058 pod_ready.go:81] duration metric: took 1.507411456s waiting for pod "coredns-5dd5756b68-k29sm" in "kube-system" namespace to be "Ready" ...
	I1226 22:23:39.953566  766058 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-772557" in "kube-system" namespace to be "Ready" ...
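The pod_ready phase reuses the same polling skeleton per system pod, swapping Nodes().Get for Pods(namespace).Get and testing the pod's Ready condition instead. A sketch of that per-pod check under the same assumptions (illustrative, not minikube's pod_ready.go):

	package readiness

	import corev1 "k8s.io/api/core/v1"

	// podIsReady reports whether the pod's Ready condition is True —
	// the check behind the pod_ready.go "Ready" messages in this log.
	func podIsReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
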
	I1226 22:23:39.953635  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-772557
	I1226 22:23:39.953645  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:39.953652  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:39.953659  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:39.956229  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:39.956302  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:39.956310  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:39.956318  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:39.956325  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:39 GMT
	I1226 22:23:39.956334  766058 round_trippers.go:580]     Audit-Id: 43c9f8e4-a7d8-4499-ac00-a26af8962493
	I1226 22:23:39.956347  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:39.956354  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:39.956470  766058 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-772557","namespace":"kube-system","uid":"f03b0f35-667b-4397-8661-975404c492e6","resourceVersion":"314","creationTimestamp":"2023-12-26T22:22:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"e0cc8d87347d790eb697a7e6691995d5","kubernetes.io/config.mirror":"e0cc8d87347d790eb697a7e6691995d5","kubernetes.io/config.seen":"2023-12-26T22:22:53.416330825Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1226 22:23:39.956934  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:39.956954  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:39.956963  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:39.956970  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:39.959416  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:39.959438  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:39.959447  766058 round_trippers.go:580]     Audit-Id: 3b3a43a6-733d-4c28-aa99-ef46b3ce5a7d
	I1226 22:23:39.959454  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:39.959460  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:39.959466  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:39.959487  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:39.959496  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:39 GMT
	I1226 22:23:39.959943  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"426","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1226 22:23:39.960387  766058 pod_ready.go:92] pod "etcd-multinode-772557" in "kube-system" namespace has status "Ready":"True"
	I1226 22:23:39.960406  766058 pod_ready.go:81] duration metric: took 6.825672ms waiting for pod "etcd-multinode-772557" in "kube-system" namespace to be "Ready" ...
	I1226 22:23:39.960443  766058 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-772557" in "kube-system" namespace to be "Ready" ...
	I1226 22:23:39.960508  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-772557
	I1226 22:23:39.960537  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:39.960552  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:39.960560  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:39.963153  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:39.963176  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:39.963185  766058 round_trippers.go:580]     Audit-Id: 3299033c-a7ee-42a1-ac37-773b90e61bc2
	I1226 22:23:39.963191  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:39.963197  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:39.963204  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:39.963213  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:39.963221  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:39 GMT
	I1226 22:23:39.963586  766058 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-772557","namespace":"kube-system","uid":"afac54c2-df76-44f8-84ea-d9fd949afd91","resourceVersion":"294","creationTimestamp":"2023-12-26T22:22:53Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"c83f6e10207e0ba7cd7c29439b906882","kubernetes.io/config.mirror":"c83f6e10207e0ba7cd7c29439b906882","kubernetes.io/config.seen":"2023-12-26T22:22:53.416321636Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1226 22:23:39.964149  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:39.964165  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:39.964174  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:39.964181  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:39.966589  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:39.966615  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:39.966623  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:39.966631  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:39 GMT
	I1226 22:23:39.966638  766058 round_trippers.go:580]     Audit-Id: 64dddc1d-28cc-45f1-97ba-acb2b2a236d6
	I1226 22:23:39.966649  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:39.966657  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:39.966664  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:39.966833  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"426","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1226 22:23:39.967258  766058 pod_ready.go:92] pod "kube-apiserver-multinode-772557" in "kube-system" namespace has status "Ready":"True"
	I1226 22:23:39.967274  766058 pod_ready.go:81] duration metric: took 6.823433ms waiting for pod "kube-apiserver-multinode-772557" in "kube-system" namespace to be "Ready" ...
	I1226 22:23:39.967287  766058 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-772557" in "kube-system" namespace to be "Ready" ...
	I1226 22:23:39.967357  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-772557
	I1226 22:23:39.967368  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:39.967377  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:39.967384  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:39.969695  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:39.969720  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:39.969729  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:39 GMT
	I1226 22:23:39.969735  766058 round_trippers.go:580]     Audit-Id: 351d748b-1f17-48fe-86ce-682630ab48d8
	I1226 22:23:39.969741  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:39.969748  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:39.969754  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:39.969762  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:39.970005  766058 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-772557","namespace":"kube-system","uid":"40cdd4d3-8f44-4eba-8df7-904793fc4571","resourceVersion":"291","creationTimestamp":"2023-12-26T22:22:53Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a2793c76a383a29211965eb883d37c03","kubernetes.io/config.mirror":"a2793c76a383a29211965eb883d37c03","kubernetes.io/config.seen":"2023-12-26T22:22:45.039269387Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1226 22:23:39.970524  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:39.970540  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:39.970548  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:39.970555  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:39.972752  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:39.972770  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:39.972777  766058 round_trippers.go:580]     Audit-Id: 7ad4eb46-bb8c-4ed8-af33-f59ea24d8c18
	I1226 22:23:39.972784  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:39.972790  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:39.972796  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:39.972802  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:39.972808  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:39 GMT
	I1226 22:23:39.972981  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"426","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1226 22:23:39.973383  766058 pod_ready.go:92] pod "kube-controller-manager-multinode-772557" in "kube-system" namespace has status "Ready":"True"
	I1226 22:23:39.973402  766058 pod_ready.go:81] duration metric: took 6.101013ms waiting for pod "kube-controller-manager-multinode-772557" in "kube-system" namespace to be "Ready" ...
	I1226 22:23:39.973414  766058 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q2rbf" in "kube-system" namespace to be "Ready" ...
	I1226 22:23:39.973469  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q2rbf
	I1226 22:23:39.973479  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:39.973487  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:39.973494  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:39.975887  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:39.975907  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:39.975915  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:39.975921  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:39.975927  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:39.975933  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:39 GMT
	I1226 22:23:39.975940  766058 round_trippers.go:580]     Audit-Id: 9013904f-8d68-4f9b-9f27-491f8324dd86
	I1226 22:23:39.975946  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:39.976154  766058 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q2rbf","generateName":"kube-proxy-","namespace":"kube-system","uid":"4ef274a5-a036-4559-babc-232be6318956","resourceVersion":"400","creationTimestamp":"2023-12-26T22:23:06Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f2bb5d32-e46e-4c09-914a-6e81f727613f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f2bb5d32-e46e-4c09-914a-6e81f727613f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I1226 22:23:40.034939  766058 request.go:629] Waited for 58.253126ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:40.035031  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:40.035037  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:40.035048  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:40.035104  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:40.038015  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:40.038045  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:40.038055  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:40.038062  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:40.038069  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:40.038077  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:40 GMT
	I1226 22:23:40.038086  766058 round_trippers.go:580]     Audit-Id: 3841b491-22a9-4999-b078-acdca7680f31
	I1226 22:23:40.038092  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:40.038273  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"426","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1226 22:23:40.038696  766058 pod_ready.go:92] pod "kube-proxy-q2rbf" in "kube-system" namespace has status "Ready":"True"
	I1226 22:23:40.038725  766058 pod_ready.go:81] duration metric: took 65.304131ms waiting for pod "kube-proxy-q2rbf" in "kube-system" namespace to be "Ready" ...
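The request.go:629 "Waited for ... due to client-side throttling" entries above come from client-go's client-side rate limiter, not from API Priority and Fairness (the message itself says as much). A rough sketch of that token-bucket behaviour with golang.org/x/time/rate follows; the QPS and burst values are assumptions picked to mirror client-go's usual defaults (5 requests/s, burst 10), not values read out of this run:

    // throttle_sketch.go: token-bucket throttling in the style of client-go's
    // client-side rate limiter. QPS=5 / burst=10 are assumed defaults, not
    // values taken from this test run.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"golang.org/x/time/rate"
    )

    func main() {
    	limiter := rate.NewLimiter(rate.Limit(5), 10) // 5 req/s, burst of 10
    	for i := 0; i < 15; i++ {
    		start := time.Now()
    		_ = limiter.Wait(context.Background()) // blocks once the burst is spent
    		if d := time.Since(start); d > time.Millisecond {
    			fmt.Printf("request %d waited %s due to client-side throttling\n", i, d)
    		}
    	}
    }

Once the burst is spent, each further call blocks for roughly 1/QPS seconds, which lines up with the ~60-200ms waits logged in this section.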
	I1226 22:23:40.038737  766058 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-772557" in "kube-system" namespace to be "Ready" ...
	I1226 22:23:40.235096  766058 request.go:629] Waited for 196.264509ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-772557
	I1226 22:23:40.235177  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-772557
	I1226 22:23:40.235189  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:40.235199  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:40.235207  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:40.237763  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:40.237796  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:40.237806  766058 round_trippers.go:580]     Audit-Id: f1ed6c62-79ee-4ace-8a18-8bcd511d68eb
	I1226 22:23:40.237812  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:40.237819  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:40.237825  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:40.237834  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:40.237841  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:40 GMT
	I1226 22:23:40.237978  766058 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-772557","namespace":"kube-system","uid":"b424c74a-800c-4bd8-b8d3-ac5bb5afe0ba","resourceVersion":"292","creationTimestamp":"2023-12-26T22:22:53Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"3ae379e0edc009083526191a36073f44","kubernetes.io/config.mirror":"3ae379e0edc009083526191a36073f44","kubernetes.io/config.seen":"2023-12-26T22:22:53.416329340Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1226 22:23:40.434762  766058 request.go:629] Waited for 196.343417ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:40.434823  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:23:40.434829  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:40.434839  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:40.434850  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:40.437391  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:40.437447  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:40.437457  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:40.437464  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:40.437474  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:40.437481  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:40 GMT
	I1226 22:23:40.437499  766058 round_trippers.go:580]     Audit-Id: 6a1bd803-aeb1-48c6-a34f-0f23812f0dce
	I1226 22:23:40.437506  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:40.437611  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"426","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1226 22:23:40.438023  766058 pod_ready.go:92] pod "kube-scheduler-multinode-772557" in "kube-system" namespace has status "Ready":"True"
	I1226 22:23:40.438040  766058 pod_ready.go:81] duration metric: took 399.292596ms waiting for pod "kube-scheduler-multinode-772557" in "kube-system" namespace to be "Ready" ...
	I1226 22:23:40.438051  766058 pod_ready.go:38] duration metric: took 2.000566198s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
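The pod_ready phase that just finished fetches each control-plane pod, inspects its Ready condition, and re-fetches the node between checks. A minimal client-go sketch of that per-pod readiness wait (not minikube's implementation; it assumes a kubeconfig at the default $HOME/.kube/config, with the pod names and the 6m0s budget copied from this run):

    // readiness_sketch.go: poll named kube-system pods until their PodReady
    // condition reports True, in the spirit of the pod_ready.go lines above.
    package main

    import (
    	"context"
    	"fmt"
    	"os"
    	"path/filepath"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the PodReady condition is True.
    func podReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute) // the 6m0s budget above
    	defer cancel()

    	for _, name := range []string{"etcd-multinode-772557", "kube-apiserver-multinode-772557"} {
    		start := time.Now()
    		for ctx.Err() == nil {
    			pod, err := client.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
    			if err == nil && podReady(pod) {
    				fmt.Printf("pod %q Ready after %s\n", name, time.Since(start))
    				break
    			}
    			time.Sleep(500 * time.Millisecond)
    		}
    	}
    }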
	I1226 22:23:40.438068  766058 api_server.go:52] waiting for apiserver process to appear ...
	I1226 22:23:40.438129  766058 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1226 22:23:40.449719  766058 command_runner.go:130] > 1275
	I1226 22:23:40.451145  766058 api_server.go:72] duration metric: took 32.533238858s to wait for apiserver process to appear ...
	I1226 22:23:40.451169  766058 api_server.go:88] waiting for apiserver healthz status ...
	I1226 22:23:40.451207  766058 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1226 22:23:40.461365  766058 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1226 22:23:40.461464  766058 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I1226 22:23:40.461484  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:40.461498  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:40.461534  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:40.462701  766058 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1226 22:23:40.462721  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:40.462730  766058 round_trippers.go:580]     Content-Length: 264
	I1226 22:23:40.462737  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:40 GMT
	I1226 22:23:40.462743  766058 round_trippers.go:580]     Audit-Id: f276bc64-51f0-456a-b2e7-ac1fb4d19197
	I1226 22:23:40.462750  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:40.462761  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:40.462767  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:40.462776  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:40.462795  766058 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/arm64"
	}
	I1226 22:23:40.462890  766058 api_server.go:141] control plane version: v1.28.4
	I1226 22:23:40.462910  766058 api_server.go:131] duration metric: took 11.732743ms to wait for apiserver health ...
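After confirming the apiserver process with pgrep, the health wait reduces to two unauthenticated HTTPS probes: GET /healthz (expecting 200 with body "ok") and GET /version (the JSON shown above). A plain net/http sketch of both probes; InsecureSkipVerify keeps the sketch short, whereas a real client would trust the cluster CA instead:

    // healthz_sketch.go: probe an apiserver's /healthz and /version endpoints.
    // InsecureSkipVerify is for brevity only; trust the cluster CA in real use.
    package main

    import (
    	"crypto/tls"
    	"encoding/json"
    	"fmt"
    	"io"
    	"net/http"
    )

    func main() {
    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    	}}

    	resp, err := client.Get("https://192.168.58.2:8443/healthz")
    	if err != nil {
    		panic(err)
    	}
    	body, _ := io.ReadAll(resp.Body)
    	resp.Body.Close()
    	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect: 200 ok

    	resp, err = client.Get("https://192.168.58.2:8443/version")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	var v struct {
    		GitVersion string `json:"gitVersion"`
    	}
    	if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
    		panic(err)
    	}
    	fmt.Println("control plane version:", v.GitVersion) // v1.28.4 in this run
    }

On default RBAC setups, /healthz and /version are readable anonymously via the system:public-info-viewer binding, which is why no bearer token is needed for these two probes.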
	I1226 22:23:40.462920  766058 system_pods.go:43] waiting for kube-system pods to appear ...
	I1226 22:23:40.634243  766058 request.go:629] Waited for 171.240961ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1226 22:23:40.634301  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1226 22:23:40.634306  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:40.634315  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:40.634326  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:40.637716  766058 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 22:23:40.637741  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:40.637750  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:40.637756  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:40.637762  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:40.637781  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:40 GMT
	I1226 22:23:40.637789  766058 round_trippers.go:580]     Audit-Id: 0d33ef05-941a-46f9-837a-0d476e025017
	I1226 22:23:40.637796  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:40.638292  766058 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"446"},"items":[{"metadata":{"name":"coredns-5dd5756b68-k29sm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"931cdf23-56fe-45a4-afb5-7d30cf6c7d97","resourceVersion":"442","creationTimestamp":"2023-12-26T22:23:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"86d64134-44b9-4f35-8c5d-6492f5e0552e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"86d64134-44b9-4f35-8c5d-6492f5e0552e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I1226 22:23:40.640679  766058 system_pods.go:59] 8 kube-system pods found
	I1226 22:23:40.640712  766058 system_pods.go:61] "coredns-5dd5756b68-k29sm" [931cdf23-56fe-45a4-afb5-7d30cf6c7d97] Running
	I1226 22:23:40.640718  766058 system_pods.go:61] "etcd-multinode-772557" [f03b0f35-667b-4397-8661-975404c492e6] Running
	I1226 22:23:40.640724  766058 system_pods.go:61] "kindnet-xkncj" [dbf3a7e0-3a68-43d0-9ec1-5ec07e8f72ca] Running
	I1226 22:23:40.640729  766058 system_pods.go:61] "kube-apiserver-multinode-772557" [afac54c2-df76-44f8-84ea-d9fd949afd91] Running
	I1226 22:23:40.640737  766058 system_pods.go:61] "kube-controller-manager-multinode-772557" [40cdd4d3-8f44-4eba-8df7-904793fc4571] Running
	I1226 22:23:40.640742  766058 system_pods.go:61] "kube-proxy-q2rbf" [4ef274a5-a036-4559-babc-232be6318956] Running
	I1226 22:23:40.640748  766058 system_pods.go:61] "kube-scheduler-multinode-772557" [b424c74a-800c-4bd8-b8d3-ac5bb5afe0ba] Running
	I1226 22:23:40.640753  766058 system_pods.go:61] "storage-provisioner" [f7fbeb0e-5dd7-4776-a9b6-5e219f6c6e4b] Running
	I1226 22:23:40.640764  766058 system_pods.go:74] duration metric: took 177.837027ms to wait for pod list to return data ...
	I1226 22:23:40.640772  766058 default_sa.go:34] waiting for default service account to be created ...
	I1226 22:23:40.835077  766058 request.go:629] Waited for 194.193968ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1226 22:23:40.835152  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1226 22:23:40.835161  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:40.835171  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:40.835181  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:40.837869  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:40.837905  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:40.837914  766058 round_trippers.go:580]     Audit-Id: 598a8ffe-d55d-4d0d-8909-910596751a12
	I1226 22:23:40.837920  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:40.837942  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:40.837963  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:40.837972  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:40.837979  766058 round_trippers.go:580]     Content-Length: 261
	I1226 22:23:40.837988  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:40 GMT
	I1226 22:23:40.838017  766058 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"447"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"098c3710-5af1-4cea-9a7e-016271092a86","resourceVersion":"362","creationTimestamp":"2023-12-26T22:23:06Z"}}]}
	I1226 22:23:40.838226  766058 default_sa.go:45] found service account: "default"
	I1226 22:23:40.838243  766058 default_sa.go:55] duration metric: took 197.464564ms for default service account to be created ...
	I1226 22:23:40.838251  766058 system_pods.go:116] waiting for k8s-apps to be running ...
	I1226 22:23:41.034675  766058 request.go:629] Waited for 196.344253ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1226 22:23:41.034733  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1226 22:23:41.034739  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:41.034748  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:41.034759  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:41.039514  766058 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 22:23:41.039542  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:41.039551  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:41.039558  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:41.039571  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:41.039578  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:41 GMT
	I1226 22:23:41.039585  766058 round_trippers.go:580]     Audit-Id: 37c43712-b0b1-440b-9626-3076077a219d
	I1226 22:23:41.039596  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:41.040271  766058 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"448"},"items":[{"metadata":{"name":"coredns-5dd5756b68-k29sm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"931cdf23-56fe-45a4-afb5-7d30cf6c7d97","resourceVersion":"442","creationTimestamp":"2023-12-26T22:23:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"86d64134-44b9-4f35-8c5d-6492f5e0552e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"86d64134-44b9-4f35-8c5d-6492f5e0552e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I1226 22:23:41.042619  766058 system_pods.go:86] 8 kube-system pods found
	I1226 22:23:41.042650  766058 system_pods.go:89] "coredns-5dd5756b68-k29sm" [931cdf23-56fe-45a4-afb5-7d30cf6c7d97] Running
	I1226 22:23:41.042657  766058 system_pods.go:89] "etcd-multinode-772557" [f03b0f35-667b-4397-8661-975404c492e6] Running
	I1226 22:23:41.042662  766058 system_pods.go:89] "kindnet-xkncj" [dbf3a7e0-3a68-43d0-9ec1-5ec07e8f72ca] Running
	I1226 22:23:41.042667  766058 system_pods.go:89] "kube-apiserver-multinode-772557" [afac54c2-df76-44f8-84ea-d9fd949afd91] Running
	I1226 22:23:41.042673  766058 system_pods.go:89] "kube-controller-manager-multinode-772557" [40cdd4d3-8f44-4eba-8df7-904793fc4571] Running
	I1226 22:23:41.042678  766058 system_pods.go:89] "kube-proxy-q2rbf" [4ef274a5-a036-4559-babc-232be6318956] Running
	I1226 22:23:41.042682  766058 system_pods.go:89] "kube-scheduler-multinode-772557" [b424c74a-800c-4bd8-b8d3-ac5bb5afe0ba] Running
	I1226 22:23:41.042691  766058 system_pods.go:89] "storage-provisioner" [f7fbeb0e-5dd7-4776-a9b6-5e219f6c6e4b] Running
	I1226 22:23:41.042698  766058 system_pods.go:126] duration metric: took 204.442167ms to wait for k8s-apps to be running ...
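The system_pods.go check lists everything in kube-system and requires each pod to report phase Running. A compact sketch of the same list-and-verify step (default kubeconfig path assumed; this is not the code that produced the lines above):

    // system_pods_sketch.go: the "k8s-apps running" check, i.e. every
    // kube-system pod must be in phase Running.
    package main

    import (
    	"context"
    	"fmt"
    	"os"
    	"path/filepath"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("",
    		filepath.Join(os.Getenv("HOME"), ".kube", "config"))
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	allRunning := true
    	for _, p := range pods.Items {
    		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
    		if p.Status.Phase != corev1.PodRunning {
    			allRunning = false
    		}
    	}
    	fmt.Println("all running:", allRunning)
    }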
	I1226 22:23:41.042709  766058 system_svc.go:44] waiting for kubelet service to be running ....
	I1226 22:23:41.042765  766058 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1226 22:23:41.056187  766058 system_svc.go:56] duration metric: took 13.468039ms WaitForService to wait for kubelet.
	I1226 22:23:41.056253  766058 kubeadm.go:581] duration metric: took 33.138351739s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1226 22:23:41.056278  766058 node_conditions.go:102] verifying NodePressure condition ...
	I1226 22:23:41.234665  766058 request.go:629] Waited for 178.316418ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1226 22:23:41.234740  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1226 22:23:41.234749  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:41.234758  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:41.234766  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:41.237305  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:41.237374  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:41.237397  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:41.237412  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:41.237420  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:41.237426  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:41 GMT
	I1226 22:23:41.237447  766058 round_trippers.go:580]     Audit-Id: c2d92bcc-8179-4547-82e5-c1498b0e75d0
	I1226 22:23:41.237461  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:41.237616  766058 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"448"},"items":[{"metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"426","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6082 chars]
	I1226 22:23:41.238092  766058 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1226 22:23:41.238120  766058 node_conditions.go:123] node cpu capacity is 2
	I1226 22:23:41.238131  766058 node_conditions.go:105] duration metric: took 181.848176ms to run NodePressure ...
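The NodePressure verification reads each node's advertised capacity (the 203034800Ki of ephemeral storage and 2 CPUs above) and its pressure conditions straight from the node status. A sketch of reading those same fields (default kubeconfig assumed):

    // node_conditions_sketch.go: print each node's capacity and pressure
    // conditions, as the node_conditions.go lines above do.
    package main

    import (
    	"context"
    	"fmt"
    	"os"
    	"path/filepath"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("",
    		filepath.Join(os.Getenv("HOME"), ".kube", "config"))
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
    		for _, c := range n.Status.Conditions {
    			switch c.Type {
    			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
    				fmt.Printf("  %s=%s\n", c.Type, c.Status)
    			}
    		}
    	}
    }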
	I1226 22:23:41.238143  766058 start.go:228] waiting for startup goroutines ...
	I1226 22:23:41.238150  766058 start.go:233] waiting for cluster config update ...
	I1226 22:23:41.238164  766058 start.go:242] writing updated cluster config ...
	I1226 22:23:41.240956  766058 out.go:177] 
	I1226 22:23:41.242976  766058 config.go:182] Loaded profile config "multinode-772557": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1226 22:23:41.243070  766058 profile.go:148] Saving config to /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/multinode-772557/config.json ...
	I1226 22:23:41.245399  766058 out.go:177] * Starting worker node multinode-772557-m02 in cluster multinode-772557
	I1226 22:23:41.247440  766058 cache.go:121] Beginning downloading kic base image for docker with crio
	I1226 22:23:41.249495  766058 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I1226 22:23:41.251381  766058 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1226 22:23:41.251413  766058 cache.go:56] Caching tarball of preloaded images
	I1226 22:23:41.251472  766058 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I1226 22:23:41.251509  766058 preload.go:174] Found /home/jenkins/minikube-integration/17857-697646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1226 22:23:41.251534  766058 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1226 22:23:41.251646  766058 profile.go:148] Saving config to /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/multinode-772557/config.json ...
	I1226 22:23:41.270262  766058 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon, skipping pull
	I1226 22:23:41.270283  766058 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in daemon, skipping load
	I1226 22:23:41.270298  766058 cache.go:194] Successfully downloaded all kic artifacts
	I1226 22:23:41.270326  766058 start.go:365] acquiring machines lock for multinode-772557-m02: {Name:mkc7fe0ede268082442527f309e939d2a2d047ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 22:23:41.270432  766058 start.go:369] acquired machines lock for "multinode-772557-m02" in 91.058µs
	I1226 22:23:41.270456  766058 start.go:93] Provisioning new machine with config: &{Name:multinode-772557 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-772557 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1226 22:23:41.270534  766058 start.go:125] createHost starting for "m02" (driver="docker")
	I1226 22:23:41.273672  766058 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1226 22:23:41.273785  766058 start.go:159] libmachine.API.Create for "multinode-772557" (driver="docker")
	I1226 22:23:41.273811  766058 client.go:168] LocalClient.Create starting
	I1226 22:23:41.273883  766058 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem
	I1226 22:23:41.273922  766058 main.go:141] libmachine: Decoding PEM data...
	I1226 22:23:41.273941  766058 main.go:141] libmachine: Parsing certificate...
	I1226 22:23:41.274007  766058 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17857-697646/.minikube/certs/cert.pem
	I1226 22:23:41.274031  766058 main.go:141] libmachine: Decoding PEM data...
	I1226 22:23:41.274042  766058 main.go:141] libmachine: Parsing certificate...
	I1226 22:23:41.274281  766058 cli_runner.go:164] Run: docker network inspect multinode-772557 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1226 22:23:41.292394  766058 network_create.go:77] Found existing network {name:multinode-772557 subnet:0x40029d59e0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I1226 22:23:41.292436  766058 kic.go:121] calculated static IP "192.168.58.3" for the "multinode-772557-m02" container
	I1226 22:23:41.292507  766058 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1226 22:23:41.313215  766058 cli_runner.go:164] Run: docker volume create multinode-772557-m02 --label name.minikube.sigs.k8s.io=multinode-772557-m02 --label created_by.minikube.sigs.k8s.io=true
	I1226 22:23:41.343370  766058 oci.go:103] Successfully created a docker volume multinode-772557-m02
	I1226 22:23:41.343467  766058 cli_runner.go:164] Run: docker run --rm --name multinode-772557-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-772557-m02 --entrypoint /usr/bin/test -v multinode-772557-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -d /var/lib
	I1226 22:23:41.913627  766058 oci.go:107] Successfully prepared a docker volume multinode-772557-m02
	I1226 22:23:41.913664  766058 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1226 22:23:41.913683  766058 kic.go:194] Starting extracting preloaded images to volume ...
	I1226 22:23:41.913773  766058 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17857-697646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-772557-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir
	I1226 22:23:46.295128  766058 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17857-697646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-772557-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c -I lz4 -xf /preloaded.tar -C /extractDir: (4.381312045s)
	I1226 22:23:46.295158  766058 kic.go:203] duration metric: took 4.381472 seconds to extract preloaded images to volume
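The extraction above is a throwaway container whose entrypoint is tar: the lz4-compressed preload tarball is bind-mounted read-only and unpacked into the new node's named volume. A sketch of driving the same pattern from Go; the tarball path is a placeholder to adjust, and the digest pin on the kicbase image is dropped for brevity:

    // preload_extract_sketch.go: unpack an lz4 image preload into a named
    // docker volume via a throwaway tar container, mirroring the cli_runner
    // invocation above.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	tarball := "/path/to/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4" // placeholder
    	volume := "multinode-772557-m02"
    	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857"

    	cmd := exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/tar",
    		"-v", tarball+":/preloaded.tar:ro",
    		"-v", volume+":/extractDir",
    		image,
    		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
    	if out, err := cmd.CombinedOutput(); err != nil {
    		panic(fmt.Sprintf("extract failed: %v\n%s", err, out))
    	}
    	fmt.Println("preload extracted into volume", volume)
    }

Populating the volume before the node container starts is presumably what lets m02 come up without pulling any images.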
	W1226 22:23:46.295309  766058 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1226 22:23:46.295433  766058 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1226 22:23:46.377281  766058 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-772557-m02 --name multinode-772557-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-772557-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-772557-m02 --network multinode-772557 --ip 192.168.58.3 --volume multinode-772557-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c
	I1226 22:23:46.736731  766058 cli_runner.go:164] Run: docker container inspect multinode-772557-m02 --format={{.State.Running}}
	I1226 22:23:46.757427  766058 cli_runner.go:164] Run: docker container inspect multinode-772557-m02 --format={{.State.Status}}
	I1226 22:23:46.789068  766058 cli_runner.go:164] Run: docker exec multinode-772557-m02 stat /var/lib/dpkg/alternatives/iptables
	I1226 22:23:46.858137  766058 oci.go:144] the created container "multinode-772557-m02" has a running status.
	I1226 22:23:46.858164  766058 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17857-697646/.minikube/machines/multinode-772557-m02/id_rsa...
	I1226 22:23:47.138498  766058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/machines/multinode-772557-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1226 22:23:47.138546  766058 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17857-697646/.minikube/machines/multinode-772557-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1226 22:23:47.174395  766058 cli_runner.go:164] Run: docker container inspect multinode-772557-m02 --format={{.State.Status}}
	I1226 22:23:47.207162  766058 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1226 22:23:47.207191  766058 kic_runner.go:114] Args: [docker exec --privileged multinode-772557-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1226 22:23:47.308048  766058 cli_runner.go:164] Run: docker container inspect multinode-772557-m02 --format={{.State.Status}}
	I1226 22:23:47.349547  766058 machine.go:88] provisioning docker machine ...
	I1226 22:23:47.349575  766058 ubuntu.go:169] provisioning hostname "multinode-772557-m02"
	I1226 22:23:47.349649  766058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-772557-m02
	I1226 22:23:47.407082  766058 main.go:141] libmachine: Using SSH client type: native
	I1226 22:23:47.407499  766058 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33751 <nil> <nil>}
	I1226 22:23:47.407512  766058 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-772557-m02 && echo "multinode-772557-m02" | sudo tee /etc/hostname
	I1226 22:23:47.409762  766058 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1226 22:23:50.564133  766058 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-772557-m02
	
	I1226 22:23:50.564211  766058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-772557-m02
	I1226 22:23:50.582859  766058 main.go:141] libmachine: Using SSH client type: native
	I1226 22:23:50.583274  766058 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33751 <nil> <nil>}
	I1226 22:23:50.583297  766058 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-772557-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-772557-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-772557-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1226 22:23:50.733841  766058 main.go:141] libmachine: SSH cmd err, output: <nil>: 
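Provisioning commands such as the hostname and /etc/hosts fixups above run over SSH to the forwarded port on 127.0.0.1 (33751 here) as user docker, authenticated with the machine key generated a few lines earlier. A sketch of running one such command with golang.org/x/crypto/ssh (not libmachine's code; skipping host-key verification is tolerable only for a throwaway test node):

    // ssh_run_sketch.go: run a provisioning command over SSH the way the
    // libmachine lines above do.
    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/17857-697646/.minikube/machines/multinode-772557-m02/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(keyBytes)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test node only
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:33751", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()

    	session, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer session.Close()

    	out, err := session.CombinedOutput(`sudo hostname multinode-772557-m02 && echo "multinode-772557-m02" | sudo tee /etc/hostname`)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%s", out)
    }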
	I1226 22:23:50.733870  766058 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17857-697646/.minikube CaCertPath:/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17857-697646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17857-697646/.minikube}
	I1226 22:23:50.733886  766058 ubuntu.go:177] setting up certificates
	I1226 22:23:50.733894  766058 provision.go:83] configureAuth start
	I1226 22:23:50.733955  766058 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-772557-m02
	I1226 22:23:50.752252  766058 provision.go:138] copyHostCerts
	I1226 22:23:50.752293  766058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17857-697646/.minikube/ca.pem
	I1226 22:23:50.752323  766058 exec_runner.go:144] found /home/jenkins/minikube-integration/17857-697646/.minikube/ca.pem, removing ...
	I1226 22:23:50.752330  766058 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17857-697646/.minikube/ca.pem
	I1226 22:23:50.752403  766058 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17857-697646/.minikube/ca.pem (1082 bytes)
	I1226 22:23:50.752481  766058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17857-697646/.minikube/cert.pem
	I1226 22:23:50.752497  766058 exec_runner.go:144] found /home/jenkins/minikube-integration/17857-697646/.minikube/cert.pem, removing ...
	I1226 22:23:50.752501  766058 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17857-697646/.minikube/cert.pem
	I1226 22:23:50.752566  766058 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17857-697646/.minikube/cert.pem (1123 bytes)
	I1226 22:23:50.752644  766058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17857-697646/.minikube/key.pem
	I1226 22:23:50.752662  766058 exec_runner.go:144] found /home/jenkins/minikube-integration/17857-697646/.minikube/key.pem, removing ...
	I1226 22:23:50.752666  766058 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17857-697646/.minikube/key.pem
	I1226 22:23:50.752692  766058 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17857-697646/.minikube/key.pem (1679 bytes)
	I1226 22:23:50.752752  766058 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17857-697646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca-key.pem org=jenkins.multinode-772557-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-772557-m02]
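The server certificate generated here is signed by the minikube CA and carries both IP and DNS SANs (192.168.58.3, 127.0.0.1, localhost, minikube, multinode-772557-m02). A self-contained crypto/x509 sketch of producing a certificate with that SAN set; it creates a throwaway CA in memory, whereas the real flow loads ca.pem and ca-key.pem from the paths in the log, and most error handling is elided to keep the sketch short:

    // servercert_sketch.go: issue a CA-signed server certificate with the SAN
    // list from the provision.go line above. Errors are ignored for brevity.
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway in-memory CA; the real flow loads ca.pem / ca-key.pem.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server certificate with the SANs seen in the log.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-772557-m02"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"localhost", "minikube", "multinode-772557-m02"},
    		IPAddresses:  []net.IP{net.ParseIP("192.168.58.3"), net.ParseIP("127.0.0.1")},
    	}
    	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }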
	I1226 22:23:50.981935  766058 provision.go:172] copyRemoteCerts
	I1226 22:23:50.982002  766058 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1226 22:23:50.982043  766058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-772557-m02
	I1226 22:23:51.004285  766058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33751 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/multinode-772557-m02/id_rsa Username:docker}
	I1226 22:23:51.108725  766058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1226 22:23:51.108826  766058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1226 22:23:51.140504  766058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1226 22:23:51.140663  766058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1226 22:23:51.172733  766058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1226 22:23:51.172800  766058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1226 22:23:51.202750  766058 provision.go:86] duration metric: configureAuth took 468.841939ms
	I1226 22:23:51.202777  766058 ubuntu.go:193] setting minikube options for container-runtime
	I1226 22:23:51.202984  766058 config.go:182] Loaded profile config "multinode-772557": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1226 22:23:51.203100  766058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-772557-m02
	I1226 22:23:51.221200  766058 main.go:141] libmachine: Using SSH client type: native
	I1226 22:23:51.221619  766058 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33751 <nil> <nil>}
	I1226 22:23:51.221699  766058 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1226 22:23:51.482612  766058 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1226 22:23:51.482638  766058 machine.go:91] provisioned docker machine in 4.133071565s
	I1226 22:23:51.482648  766058 client.go:171] LocalClient.Create took 10.208826319s
	I1226 22:23:51.482665  766058 start.go:167] duration metric: libmachine.API.Create for "multinode-772557" took 10.208879504s
	I1226 22:23:51.482674  766058 start.go:300] post-start starting for "multinode-772557-m02" (driver="docker")
	I1226 22:23:51.482684  766058 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1226 22:23:51.482748  766058 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1226 22:23:51.482790  766058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-772557-m02
	I1226 22:23:51.501941  766058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33751 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/multinode-772557-m02/id_rsa Username:docker}
	I1226 22:23:51.603838  766058 ssh_runner.go:195] Run: cat /etc/os-release
	I1226 22:23:51.607771  766058 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1226 22:23:51.607791  766058 command_runner.go:130] > NAME="Ubuntu"
	I1226 22:23:51.607798  766058 command_runner.go:130] > VERSION_ID="22.04"
	I1226 22:23:51.607805  766058 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1226 22:23:51.607811  766058 command_runner.go:130] > VERSION_CODENAME=jammy
	I1226 22:23:51.607815  766058 command_runner.go:130] > ID=ubuntu
	I1226 22:23:51.607821  766058 command_runner.go:130] > ID_LIKE=debian
	I1226 22:23:51.607827  766058 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1226 22:23:51.607833  766058 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1226 22:23:51.607844  766058 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1226 22:23:51.607853  766058 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1226 22:23:51.607858  766058 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1226 22:23:51.607903  766058 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1226 22:23:51.607925  766058 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1226 22:23:51.607936  766058 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
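	The three "Couldn't set key" messages are expected: libmachine maps /etc/os-release onto a fixed struct and skips keys with no matching field. The file itself is shell-sourceable KEY=value pairs, so it can also be read by hand (sketch):

	  # /etc/os-release is sourceable KEY=value pairs
	  . /etc/os-release
	  echo "$NAME $VERSION_ID"   # -> Ubuntu 22.04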
	I1226 22:23:51.607943  766058 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1226 22:23:51.607953  766058 filesync.go:126] Scanning /home/jenkins/minikube-integration/17857-697646/.minikube/addons for local assets ...
	I1226 22:23:51.608011  766058 filesync.go:126] Scanning /home/jenkins/minikube-integration/17857-697646/.minikube/files for local assets ...
	I1226 22:23:51.608089  766058 filesync.go:149] local asset: /home/jenkins/minikube-integration/17857-697646/.minikube/files/etc/ssl/certs/7030362.pem -> 7030362.pem in /etc/ssl/certs
	I1226 22:23:51.608097  766058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/files/etc/ssl/certs/7030362.pem -> /etc/ssl/certs/7030362.pem
	I1226 22:23:51.608199  766058 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1226 22:23:51.618528  766058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/files/etc/ssl/certs/7030362.pem --> /etc/ssl/certs/7030362.pem (1708 bytes)
	I1226 22:23:51.648243  766058 start.go:303] post-start completed in 165.553519ms
	I1226 22:23:51.648755  766058 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-772557-m02
	I1226 22:23:51.667188  766058 profile.go:148] Saving config to /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/multinode-772557/config.json ...
	I1226 22:23:51.667474  766058 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1226 22:23:51.667524  766058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-772557-m02
	I1226 22:23:51.687667  766058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33751 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/multinode-772557-m02/id_rsa Username:docker}
	I1226 22:23:51.790923  766058 command_runner.go:130] > 12%
	I1226 22:23:51.791004  766058 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1226 22:23:51.796560  766058 command_runner.go:130] > 171G
	I1226 22:23:51.796958  766058 start.go:128] duration metric: createHost completed in 10.526411929s
	I1226 22:23:51.796976  766058 start.go:83] releasing machines lock for "multinode-772557-m02", held for 10.526535602s
	I1226 22:23:51.797048  766058 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-772557-m02
	I1226 22:23:51.817627  766058 out.go:177] * Found network options:
	I1226 22:23:51.819625  766058 out.go:177]   - NO_PROXY=192.168.58.2
	W1226 22:23:51.821318  766058 proxy.go:119] fail to check proxy env: Error ip not in block
	W1226 22:23:51.821355  766058 proxy.go:119] fail to check proxy env: Error ip not in block
	I1226 22:23:51.821430  766058 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1226 22:23:51.821475  766058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-772557-m02
	I1226 22:23:51.821725  766058 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1226 22:23:51.821777  766058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-772557-m02
	I1226 22:23:51.848857  766058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33751 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/multinode-772557-m02/id_rsa Username:docker}
	I1226 22:23:51.861972  766058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33751 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/multinode-772557-m02/id_rsa Username:docker}
	I1226 22:23:52.103428  766058 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1226 22:23:52.103505  766058 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1226 22:23:52.108617  766058 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1226 22:23:52.108645  766058 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1226 22:23:52.108708  766058 command_runner.go:130] > Device: b3h/179d	Inode: 1302392     Links: 1
	I1226 22:23:52.108730  766058 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1226 22:23:52.108741  766058 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1226 22:23:52.108748  766058 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1226 22:23:52.108757  766058 command_runner.go:130] > Change: 2023-12-26 21:45:18.403362393 +0000
	I1226 22:23:52.108763  766058 command_runner.go:130] >  Birth: 2023-12-26 21:45:18.403362393 +0000
	I1226 22:23:52.109267  766058 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1226 22:23:52.135024  766058 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1226 22:23:52.135175  766058 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1226 22:23:52.176624  766058 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1226 22:23:52.176711  766058 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
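	Both CNI steps use the same rename-to-*.mk_disabled pattern, so the configs can be restored later by stripping the suffix. A quoted sketch of the loopback step (the logged form leaves the globs unquoted and relies on the shell passing them through unexpanded):

	  sudo find /etc/cni/net.d -maxdepth 1 -type f -name '*loopback.conf*' \
	    -not -name '*.mk_disabled' \
	    -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;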
	I1226 22:23:52.176734  766058 start.go:475] detecting cgroup driver to use...
	I1226 22:23:52.176796  766058 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1226 22:23:52.176867  766058 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1226 22:23:52.195997  766058 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1226 22:23:52.211023  766058 docker.go:203] disabling cri-docker service (if available) ...
	I1226 22:23:52.211109  766058 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1226 22:23:52.227139  766058 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1226 22:23:52.244831  766058 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1226 22:23:52.347052  766058 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1226 22:23:52.448705  766058 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1226 22:23:52.448789  766058 docker.go:219] disabling docker service ...
	I1226 22:23:52.448881  766058 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1226 22:23:52.472040  766058 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1226 22:23:52.485818  766058 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1226 22:23:52.499071  766058 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1226 22:23:52.598172  766058 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1226 22:23:52.701904  766058 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1226 22:23:52.702052  766058 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
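	The docker shutdown above is the standard stop/disable/mask sequence; masking symlinks the unit to /dev/null so nothing can start it again, which matches the "Created symlink" messages in the log. Condensed sketch:

	  sudo systemctl stop -f docker.socket docker.service
	  sudo systemctl disable docker.socket
	  sudo systemctl mask docker.service   # Created symlink ... → /dev/null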
	I1226 22:23:52.715121  766058 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1226 22:23:52.735265  766058 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
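	With /etc/crictl.yaml in place, crictl resolves the CRI-O socket on its own; the two invocations below are equivalent (sketch):

	  sudo crictl version
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version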
	I1226 22:23:52.736618  766058 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1226 22:23:52.736728  766058 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1226 22:23:52.750722  766058 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1226 22:23:52.750846  766058 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1226 22:23:52.763534  766058 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1226 22:23:52.777259  766058 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
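	After the three sed edits, the drop-in should carry exactly these settings (sketch of the relevant lines only; the sed scripts rewrite or insert matching lines in place, so any surrounding TOML tables are left untouched):

	  # /etc/crio/crio.conf.d/02-crio.conf (relevant lines)
	  pause_image = "registry.k8s.io/pause:3.9"
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"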
	I1226 22:23:52.789645  766058 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1226 22:23:52.801136  766058 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1226 22:23:52.810348  766058 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1226 22:23:52.811383  766058 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1226 22:23:52.821958  766058 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1226 22:23:52.916134  766058 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1226 22:23:53.038869  766058 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1226 22:23:53.039007  766058 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1226 22:23:53.043641  766058 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1226 22:23:53.043668  766058 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1226 22:23:53.043677  766058 command_runner.go:130] > Device: bch/188d	Inode: 190         Links: 1
	I1226 22:23:53.043686  766058 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1226 22:23:53.043692  766058 command_runner.go:130] > Access: 2023-12-26 22:23:53.022196936 +0000
	I1226 22:23:53.043705  766058 command_runner.go:130] > Modify: 2023-12-26 22:23:53.022196936 +0000
	I1226 22:23:53.043712  766058 command_runner.go:130] > Change: 2023-12-26 22:23:53.022196936 +0000
	I1226 22:23:53.043720  766058 command_runner.go:130] >  Birth: -
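	The 60s socket wait above is implemented in Go, but amounts to polling stat until the socket exists; rough shell equivalent (sketch):

	  for _ in $(seq 1 60); do
	    stat /var/run/crio/crio.sock >/dev/null 2>&1 && break
	    sleep 1
	  done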
	I1226 22:23:53.043988  766058 start.go:543] Will wait 60s for crictl version
	I1226 22:23:53.044053  766058 ssh_runner.go:195] Run: which crictl
	I1226 22:23:53.049679  766058 command_runner.go:130] > /usr/bin/crictl
	I1226 22:23:53.049754  766058 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1226 22:23:53.101803  766058 command_runner.go:130] > Version:  0.1.0
	I1226 22:23:53.101883  766058 command_runner.go:130] > RuntimeName:  cri-o
	I1226 22:23:53.101903  766058 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1226 22:23:53.101928  766058 command_runner.go:130] > RuntimeApiVersion:  v1
	I1226 22:23:53.104316  766058 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1226 22:23:53.104419  766058 ssh_runner.go:195] Run: crio --version
	I1226 22:23:53.145531  766058 command_runner.go:130] > crio version 1.24.6
	I1226 22:23:53.145552  766058 command_runner.go:130] > Version:          1.24.6
	I1226 22:23:53.145562  766058 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1226 22:23:53.145567  766058 command_runner.go:130] > GitTreeState:     clean
	I1226 22:23:53.145574  766058 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1226 22:23:53.145579  766058 command_runner.go:130] > GoVersion:        go1.18.2
	I1226 22:23:53.145584  766058 command_runner.go:130] > Compiler:         gc
	I1226 22:23:53.145590  766058 command_runner.go:130] > Platform:         linux/arm64
	I1226 22:23:53.145605  766058 command_runner.go:130] > Linkmode:         dynamic
	I1226 22:23:53.145622  766058 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1226 22:23:53.145629  766058 command_runner.go:130] > SeccompEnabled:   true
	I1226 22:23:53.145634  766058 command_runner.go:130] > AppArmorEnabled:  false
	I1226 22:23:53.147512  766058 ssh_runner.go:195] Run: crio --version
	I1226 22:23:53.190647  766058 command_runner.go:130] > crio version 1.24.6
	I1226 22:23:53.190673  766058 command_runner.go:130] > Version:          1.24.6
	I1226 22:23:53.190683  766058 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1226 22:23:53.190689  766058 command_runner.go:130] > GitTreeState:     clean
	I1226 22:23:53.190697  766058 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1226 22:23:53.190703  766058 command_runner.go:130] > GoVersion:        go1.18.2
	I1226 22:23:53.190709  766058 command_runner.go:130] > Compiler:         gc
	I1226 22:23:53.190714  766058 command_runner.go:130] > Platform:         linux/arm64
	I1226 22:23:53.190723  766058 command_runner.go:130] > Linkmode:         dynamic
	I1226 22:23:53.190736  766058 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1226 22:23:53.190743  766058 command_runner.go:130] > SeccompEnabled:   true
	I1226 22:23:53.190751  766058 command_runner.go:130] > AppArmorEnabled:  false
	I1226 22:23:53.194547  766058 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I1226 22:23:53.196308  766058 out.go:177]   - env NO_PROXY=192.168.58.2
	I1226 22:23:53.198150  766058 cli_runner.go:164] Run: docker network inspect multinode-772557 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1226 22:23:53.216208  766058 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1226 22:23:53.221167  766058 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
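	The grep guard checks for an existing host.minikube.internal entry; the rewrite then drops any stale line and appends the current gateway IP. Unrolled sketch of the same one-liner:

	  grep -v $'\thost.minikube.internal$' /etc/hosts > /tmp/h.$$
	  printf '192.168.58.1\thost.minikube.internal\n' >> /tmp/h.$$
	  sudo cp /tmp/h.$$ /etc/hosts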
	I1226 22:23:53.235594  766058 certs.go:56] Setting up /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/multinode-772557 for IP: 192.168.58.3
	I1226 22:23:53.235627  766058 certs.go:190] acquiring lock for shared ca certs: {Name:mke6488a150c186a525017f74b8a69a9f5240d76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1226 22:23:53.235762  766058 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17857-697646/.minikube/ca.key
	I1226 22:23:53.235811  766058 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17857-697646/.minikube/proxy-client-ca.key
	I1226 22:23:53.235827  766058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1226 22:23:53.235841  766058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1226 22:23:53.235858  766058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1226 22:23:53.235869  766058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1226 22:23:53.235922  766058 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/home/jenkins/minikube-integration/17857-697646/.minikube/certs/703036.pem (1338 bytes)
	W1226 22:23:53.235955  766058 certs.go:433] ignoring /home/jenkins/minikube-integration/17857-697646/.minikube/certs/home/jenkins/minikube-integration/17857-697646/.minikube/certs/703036_empty.pem, impossibly tiny 0 bytes
	I1226 22:23:53.235968  766058 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca-key.pem (1675 bytes)
	I1226 22:23:53.235993  766058 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem (1082 bytes)
	I1226 22:23:53.236035  766058 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/home/jenkins/minikube-integration/17857-697646/.minikube/certs/cert.pem (1123 bytes)
	I1226 22:23:53.236063  766058 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/home/jenkins/minikube-integration/17857-697646/.minikube/certs/key.pem (1679 bytes)
	I1226 22:23:53.236109  766058 certs.go:437] found cert: /home/jenkins/minikube-integration/17857-697646/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17857-697646/.minikube/files/etc/ssl/certs/7030362.pem (1708 bytes)
	I1226 22:23:53.236141  766058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/files/etc/ssl/certs/7030362.pem -> /usr/share/ca-certificates/7030362.pem
	I1226 22:23:53.236169  766058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1226 22:23:53.236185  766058 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/703036.pem -> /usr/share/ca-certificates/703036.pem
	I1226 22:23:53.236535  766058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1226 22:23:53.266408  766058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1226 22:23:53.295458  766058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1226 22:23:53.324267  766058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1226 22:23:53.355336  766058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/files/etc/ssl/certs/7030362.pem --> /usr/share/ca-certificates/7030362.pem (1708 bytes)
	I1226 22:23:53.384508  766058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1226 22:23:53.415167  766058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/certs/703036.pem --> /usr/share/ca-certificates/703036.pem (1338 bytes)
	I1226 22:23:53.446263  766058 ssh_runner.go:195] Run: openssl version
	I1226 22:23:53.453209  766058 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1226 22:23:53.453812  766058 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7030362.pem && ln -fs /usr/share/ca-certificates/7030362.pem /etc/ssl/certs/7030362.pem"
	I1226 22:23:53.466944  766058 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7030362.pem
	I1226 22:23:53.471688  766058 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 26 21:58 /usr/share/ca-certificates/7030362.pem
	I1226 22:23:53.471721  766058 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 26 21:58 /usr/share/ca-certificates/7030362.pem
	I1226 22:23:53.471772  766058 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7030362.pem
	I1226 22:23:53.480083  766058 command_runner.go:130] > 3ec20f2e
	I1226 22:23:53.480507  766058 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7030362.pem /etc/ssl/certs/3ec20f2e.0"
	I1226 22:23:53.492072  766058 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1226 22:23:53.503827  766058 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1226 22:23:53.509476  766058 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 26 21:45 /usr/share/ca-certificates/minikubeCA.pem
	I1226 22:23:53.509810  766058 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 26 21:45 /usr/share/ca-certificates/minikubeCA.pem
	I1226 22:23:53.509922  766058 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1226 22:23:53.518428  766058 command_runner.go:130] > b5213941
	I1226 22:23:53.518722  766058 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1226 22:23:53.530183  766058 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/703036.pem && ln -fs /usr/share/ca-certificates/703036.pem /etc/ssl/certs/703036.pem"
	I1226 22:23:53.542479  766058 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/703036.pem
	I1226 22:23:53.547246  766058 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 26 21:58 /usr/share/ca-certificates/703036.pem
	I1226 22:23:53.547323  766058 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 26 21:58 /usr/share/ca-certificates/703036.pem
	I1226 22:23:53.547388  766058 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/703036.pem
	I1226 22:23:53.556427  766058 command_runner.go:130] > 51391683
	I1226 22:23:53.556509  766058 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/703036.pem /etc/ssl/certs/51391683.0"
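	Each CA gets an OpenSSL subject-hash symlink so the system trust-store lookup can find it; the pattern repeated for all three certs above, as one sketch:

	  pem=/usr/share/ca-certificates/703036.pem
	  sudo ln -fs "$pem" /etc/ssl/certs/703036.pem
	  hash=$(openssl x509 -hash -noout -in "$pem")   # -> 51391683
	  sudo ln -fs /etc/ssl/certs/703036.pem "/etc/ssl/certs/${hash}.0"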
	I1226 22:23:53.568419  766058 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1226 22:23:53.573010  766058 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1226 22:23:53.573045  766058 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1226 22:23:53.573196  766058 ssh_runner.go:195] Run: crio config
	I1226 22:23:53.628201  766058 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1226 22:23:53.628228  766058 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1226 22:23:53.628237  766058 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1226 22:23:53.628241  766058 command_runner.go:130] > #
	I1226 22:23:53.628249  766058 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1226 22:23:53.628257  766058 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1226 22:23:53.628264  766058 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1226 22:23:53.628278  766058 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1226 22:23:53.628283  766058 command_runner.go:130] > # reload'.
	I1226 22:23:53.628292  766058 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1226 22:23:53.628301  766058 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1226 22:23:53.628312  766058 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1226 22:23:53.628320  766058 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1226 22:23:53.628328  766058 command_runner.go:130] > [crio]
	I1226 22:23:53.628336  766058 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1226 22:23:53.628347  766058 command_runner.go:130] > # containers images, in this directory.
	I1226 22:23:53.628356  766058 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1226 22:23:53.628368  766058 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1226 22:23:53.628377  766058 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1226 22:23:53.628385  766058 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1226 22:23:53.628396  766058 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1226 22:23:53.628401  766058 command_runner.go:130] > # storage_driver = "vfs"
	I1226 22:23:53.628408  766058 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1226 22:23:53.628419  766058 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1226 22:23:53.628424  766058 command_runner.go:130] > # storage_option = [
	I1226 22:23:53.628432  766058 command_runner.go:130] > # ]
	I1226 22:23:53.628439  766058 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1226 22:23:53.628447  766058 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1226 22:23:53.628453  766058 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1226 22:23:53.628467  766058 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1226 22:23:53.628481  766058 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1226 22:23:53.628487  766058 command_runner.go:130] > # always happen on a node reboot
	I1226 22:23:53.628497  766058 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1226 22:23:53.628504  766058 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1226 22:23:53.628525  766058 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1226 22:23:53.628536  766058 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1226 22:23:53.628544  766058 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1226 22:23:53.628554  766058 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1226 22:23:53.628567  766058 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1226 22:23:53.628573  766058 command_runner.go:130] > # internal_wipe = true
	I1226 22:23:53.628584  766058 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1226 22:23:53.628593  766058 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1226 22:23:53.628603  766058 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1226 22:23:53.628610  766058 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1226 22:23:53.628622  766058 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1226 22:23:53.628627  766058 command_runner.go:130] > [crio.api]
	I1226 22:23:53.628634  766058 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1226 22:23:53.628641  766058 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1226 22:23:53.628648  766058 command_runner.go:130] > # IP address on which the stream server will listen.
	I1226 22:23:53.628657  766058 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1226 22:23:53.628666  766058 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1226 22:23:53.628676  766058 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1226 22:23:53.628681  766058 command_runner.go:130] > # stream_port = "0"
	I1226 22:23:53.628687  766058 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1226 22:23:53.628693  766058 command_runner.go:130] > # stream_enable_tls = false
	I1226 22:23:53.628701  766058 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1226 22:23:53.628709  766058 command_runner.go:130] > # stream_idle_timeout = ""
	I1226 22:23:53.628717  766058 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1226 22:23:53.628725  766058 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1226 22:23:53.628731  766058 command_runner.go:130] > # minutes.
	I1226 22:23:53.628736  766058 command_runner.go:130] > # stream_tls_cert = ""
	I1226 22:23:53.628744  766058 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1226 22:23:53.628757  766058 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1226 22:23:53.628762  766058 command_runner.go:130] > # stream_tls_key = ""
	I1226 22:23:53.628775  766058 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1226 22:23:53.628783  766058 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1226 22:23:53.628794  766058 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1226 22:23:53.628799  766058 command_runner.go:130] > # stream_tls_ca = ""
	I1226 22:23:53.628809  766058 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1226 22:23:53.628817  766058 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1226 22:23:53.628826  766058 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1226 22:23:53.628836  766058 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1226 22:23:53.628849  766058 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1226 22:23:53.628860  766058 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1226 22:23:53.628864  766058 command_runner.go:130] > [crio.runtime]
	I1226 22:23:53.628873  766058 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1226 22:23:53.628880  766058 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1226 22:23:53.628888  766058 command_runner.go:130] > # "nofile=1024:2048"
	I1226 22:23:53.628896  766058 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1226 22:23:53.628901  766058 command_runner.go:130] > # default_ulimits = [
	I1226 22:23:53.628906  766058 command_runner.go:130] > # ]
	I1226 22:23:53.628916  766058 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1226 22:23:53.628926  766058 command_runner.go:130] > # no_pivot = false
	I1226 22:23:53.628934  766058 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1226 22:23:53.628946  766058 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1226 22:23:53.628953  766058 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1226 22:23:53.628964  766058 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1226 22:23:53.628971  766058 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1226 22:23:53.628980  766058 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1226 22:23:53.628988  766058 command_runner.go:130] > # conmon = ""
	I1226 22:23:53.628994  766058 command_runner.go:130] > # Cgroup setting for conmon
	I1226 22:23:53.629003  766058 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1226 22:23:53.629010  766058 command_runner.go:130] > conmon_cgroup = "pod"
	I1226 22:23:53.629018  766058 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1226 22:23:53.629027  766058 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1226 22:23:53.629036  766058 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1226 22:23:53.629045  766058 command_runner.go:130] > # conmon_env = [
	I1226 22:23:53.629049  766058 command_runner.go:130] > # ]
	I1226 22:23:53.629057  766058 command_runner.go:130] > # Additional environment variables to set for all the
	I1226 22:23:53.629067  766058 command_runner.go:130] > # containers. These are overridden if set in the
	I1226 22:23:53.629074  766058 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1226 22:23:53.629079  766058 command_runner.go:130] > # default_env = [
	I1226 22:23:53.629085  766058 command_runner.go:130] > # ]
	I1226 22:23:53.629095  766058 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1226 22:23:53.629101  766058 command_runner.go:130] > # selinux = false
	I1226 22:23:53.629111  766058 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1226 22:23:53.629119  766058 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1226 22:23:53.629130  766058 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1226 22:23:53.629135  766058 command_runner.go:130] > # seccomp_profile = ""
	I1226 22:23:53.629146  766058 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1226 22:23:53.629154  766058 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1226 22:23:53.629165  766058 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1226 22:23:53.629170  766058 command_runner.go:130] > # which might increase security.
	I1226 22:23:53.629177  766058 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1226 22:23:53.629188  766058 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1226 22:23:53.629196  766058 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1226 22:23:53.629207  766058 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1226 22:23:53.629215  766058 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1226 22:23:53.629221  766058 command_runner.go:130] > # This option supports live configuration reload.
	I1226 22:23:53.629227  766058 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1226 22:23:53.629237  766058 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1226 22:23:53.629243  766058 command_runner.go:130] > # the cgroup blockio controller.
	I1226 22:23:53.629253  766058 command_runner.go:130] > # blockio_config_file = ""
	I1226 22:23:53.629261  766058 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1226 22:23:53.629266  766058 command_runner.go:130] > # irqbalance daemon.
	I1226 22:23:53.629273  766058 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1226 22:23:53.629281  766058 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1226 22:23:53.629291  766058 command_runner.go:130] > # This option supports live configuration reload.
	I1226 22:23:53.629296  766058 command_runner.go:130] > # rdt_config_file = ""
	I1226 22:23:53.629303  766058 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1226 22:23:53.629312  766058 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1226 22:23:53.629320  766058 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1226 22:23:53.629328  766058 command_runner.go:130] > # separate_pull_cgroup = ""
	I1226 22:23:53.629337  766058 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1226 22:23:53.629344  766058 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1226 22:23:53.629351  766058 command_runner.go:130] > # will be added.
	I1226 22:23:53.629357  766058 command_runner.go:130] > # default_capabilities = [
	I1226 22:23:53.629362  766058 command_runner.go:130] > # 	"CHOWN",
	I1226 22:23:53.629369  766058 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1226 22:23:53.629378  766058 command_runner.go:130] > # 	"FSETID",
	I1226 22:23:53.629382  766058 command_runner.go:130] > # 	"FOWNER",
	I1226 22:23:53.629387  766058 command_runner.go:130] > # 	"SETGID",
	I1226 22:23:53.629396  766058 command_runner.go:130] > # 	"SETUID",
	I1226 22:23:53.629401  766058 command_runner.go:130] > # 	"SETPCAP",
	I1226 22:23:53.629406  766058 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1226 22:23:53.629415  766058 command_runner.go:130] > # 	"KILL",
	I1226 22:23:53.629420  766058 command_runner.go:130] > # ]
	I1226 22:23:53.629429  766058 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1226 22:23:53.629438  766058 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1226 22:23:53.629446  766058 command_runner.go:130] > # add_inheritable_capabilities = true
	I1226 22:23:53.629454  766058 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1226 22:23:53.629465  766058 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1226 22:23:53.629470  766058 command_runner.go:130] > # default_sysctls = [
	I1226 22:23:53.629475  766058 command_runner.go:130] > # ]
	I1226 22:23:53.629481  766058 command_runner.go:130] > # List of devices on the host that a
	I1226 22:23:53.629493  766058 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1226 22:23:53.629499  766058 command_runner.go:130] > # allowed_devices = [
	I1226 22:23:53.629514  766058 command_runner.go:130] > # 	"/dev/fuse",
	I1226 22:23:53.629518  766058 command_runner.go:130] > # ]
	I1226 22:23:53.629525  766058 command_runner.go:130] > # List of additional devices, specified as
	I1226 22:23:53.629542  766058 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1226 22:23:53.629555  766058 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1226 22:23:53.629567  766058 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1226 22:23:53.629576  766058 command_runner.go:130] > # additional_devices = [
	I1226 22:23:53.629581  766058 command_runner.go:130] > # ]
	I1226 22:23:53.629588  766058 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1226 22:23:53.629596  766058 command_runner.go:130] > # cdi_spec_dirs = [
	I1226 22:23:53.629601  766058 command_runner.go:130] > # 	"/etc/cdi",
	I1226 22:23:53.629606  766058 command_runner.go:130] > # 	"/var/run/cdi",
	I1226 22:23:53.629610  766058 command_runner.go:130] > # ]
	I1226 22:23:53.629618  766058 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1226 22:23:53.629628  766058 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1226 22:23:53.629633  766058 command_runner.go:130] > # Defaults to false.
	I1226 22:23:53.629645  766058 command_runner.go:130] > # device_ownership_from_security_context = false
	I1226 22:23:53.629653  766058 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1226 22:23:53.629664  766058 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1226 22:23:53.629669  766058 command_runner.go:130] > # hooks_dir = [
	I1226 22:23:53.629675  766058 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1226 22:23:53.629683  766058 command_runner.go:130] > # ]
	I1226 22:23:53.629690  766058 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1226 22:23:53.629698  766058 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1226 22:23:53.629705  766058 command_runner.go:130] > # its default mounts from the following two files:
	I1226 22:23:53.629711  766058 command_runner.go:130] > #
	I1226 22:23:53.629719  766058 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1226 22:23:53.629728  766058 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1226 22:23:53.629739  766058 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1226 22:23:53.629744  766058 command_runner.go:130] > #
	I1226 22:23:53.629757  766058 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1226 22:23:53.629765  766058 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1226 22:23:53.629776  766058 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1226 22:23:53.629783  766058 command_runner.go:130] > #      only add mounts it finds in this file.
	I1226 22:23:53.629787  766058 command_runner.go:130] > #
	I1226 22:23:53.629792  766058 command_runner.go:130] > # default_mounts_file = ""
	I1226 22:23:53.629801  766058 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1226 22:23:53.629810  766058 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1226 22:23:53.629817  766058 command_runner.go:130] > # pids_limit = 0
	I1226 22:23:53.629830  766058 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1226 22:23:53.629838  766058 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1226 22:23:53.629849  766058 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1226 22:23:53.629859  766058 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1226 22:23:53.629869  766058 command_runner.go:130] > # log_size_max = -1
	I1226 22:23:53.629877  766058 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1226 22:23:53.629883  766058 command_runner.go:130] > # log_to_journald = false
	I1226 22:23:53.629893  766058 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1226 22:23:53.629899  766058 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1226 22:23:53.629906  766058 command_runner.go:130] > # Path to directory for container attach sockets.
	I1226 22:23:53.629917  766058 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1226 22:23:53.629926  766058 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1226 22:23:53.629934  766058 command_runner.go:130] > # bind_mount_prefix = ""
	I1226 22:23:53.629942  766058 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1226 22:23:53.629951  766058 command_runner.go:130] > # read_only = false
	I1226 22:23:53.629959  766058 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1226 22:23:53.629967  766058 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1226 22:23:53.629973  766058 command_runner.go:130] > # live configuration reload.
	I1226 22:23:53.629980  766058 command_runner.go:130] > # log_level = "info"
	I1226 22:23:53.629987  766058 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1226 22:23:53.629994  766058 command_runner.go:130] > # This option supports live configuration reload.
	I1226 22:23:53.630003  766058 command_runner.go:130] > # log_filter = ""
	I1226 22:23:53.630011  766058 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1226 22:23:53.630023  766058 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1226 22:23:53.630028  766058 command_runner.go:130] > # separated by comma.
	I1226 22:23:53.630033  766058 command_runner.go:130] > # uid_mappings = ""
	I1226 22:23:53.630041  766058 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1226 22:23:53.630048  766058 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1226 22:23:53.630053  766058 command_runner.go:130] > # separated by comma.
	I1226 22:23:53.630058  766058 command_runner.go:130] > # gid_mappings = ""
	I1226 22:23:53.630066  766058 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1226 22:23:53.630076  766058 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1226 22:23:53.630084  766058 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1226 22:23:53.630093  766058 command_runner.go:130] > # minimum_mappable_uid = -1
	I1226 22:23:53.630101  766058 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1226 22:23:53.630112  766058 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1226 22:23:53.630120  766058 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1226 22:23:53.630129  766058 command_runner.go:130] > # minimum_mappable_gid = -1
	I1226 22:23:53.630137  766058 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1226 22:23:53.630148  766058 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1226 22:23:53.630155  766058 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1226 22:23:53.630161  766058 command_runner.go:130] > # ctr_stop_timeout = 30
	I1226 22:23:53.630168  766058 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1226 22:23:53.630176  766058 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1226 22:23:53.630186  766058 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1226 22:23:53.630192  766058 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1226 22:23:53.630201  766058 command_runner.go:130] > # drop_infra_ctr = true
	I1226 22:23:53.630210  766058 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1226 22:23:53.630220  766058 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1226 22:23:53.630230  766058 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1226 22:23:53.630236  766058 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1226 22:23:53.630244  766058 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1226 22:23:53.630254  766058 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1226 22:23:53.630260  766058 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1226 22:23:53.630268  766058 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1226 22:23:53.630277  766058 command_runner.go:130] > # pinns_path = ""
	I1226 22:23:53.630285  766058 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1226 22:23:53.630296  766058 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1226 22:23:53.630304  766058 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1226 22:23:53.630310  766058 command_runner.go:130] > # default_runtime = "runc"
	I1226 22:23:53.630316  766058 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1226 22:23:53.630326  766058 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1226 22:23:53.630341  766058 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1226 22:23:53.630347  766058 command_runner.go:130] > # creation as a file is not desired either.
	I1226 22:23:53.630357  766058 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1226 22:23:53.630369  766058 command_runner.go:130] > # the hostname is being managed dynamically.
	I1226 22:23:53.630375  766058 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1226 22:23:53.630380  766058 command_runner.go:130] > # ]
	I1226 22:23:53.630388  766058 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1226 22:23:53.630396  766058 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1226 22:23:53.630408  766058 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1226 22:23:53.630416  766058 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1226 22:23:53.630423  766058 command_runner.go:130] > #
	I1226 22:23:53.630429  766058 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1226 22:23:53.630436  766058 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1226 22:23:53.630444  766058 command_runner.go:130] > #  runtime_type = "oci"
	I1226 22:23:53.630451  766058 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1226 22:23:53.630462  766058 command_runner.go:130] > #  privileged_without_host_devices = false
	I1226 22:23:53.630467  766058 command_runner.go:130] > #  allowed_annotations = []
	I1226 22:23:53.630472  766058 command_runner.go:130] > # Where:
	I1226 22:23:53.630478  766058 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1226 22:23:53.630486  766058 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1226 22:23:53.630495  766058 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1226 22:23:53.630506  766058 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1226 22:23:53.630511  766058 command_runner.go:130] > #   in $PATH.
	I1226 22:23:53.630519  766058 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1226 22:23:53.630530  766058 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1226 22:23:53.630542  766058 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1226 22:23:53.630547  766058 command_runner.go:130] > #   state.
	I1226 22:23:53.630555  766058 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1226 22:23:53.630562  766058 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1226 22:23:53.630570  766058 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1226 22:23:53.630580  766058 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1226 22:23:53.630588  766058 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1226 22:23:53.630600  766058 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1226 22:23:53.630606  766058 command_runner.go:130] > #   The currently recognized values are:
	I1226 22:23:53.630618  766058 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1226 22:23:53.630627  766058 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1226 22:23:53.630634  766058 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1226 22:23:53.630642  766058 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1226 22:23:53.630655  766058 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1226 22:23:53.630663  766058 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1226 22:23:53.630676  766058 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1226 22:23:53.630685  766058 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1226 22:23:53.630695  766058 command_runner.go:130] > #   should be moved to the container's cgroup
	I1226 22:23:53.630701  766058 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1226 22:23:53.630707  766058 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1226 22:23:53.630712  766058 command_runner.go:130] > runtime_type = "oci"
	I1226 22:23:53.630718  766058 command_runner.go:130] > runtime_root = "/run/runc"
	I1226 22:23:53.630728  766058 command_runner.go:130] > runtime_config_path = ""
	I1226 22:23:53.630733  766058 command_runner.go:130] > monitor_path = ""
	I1226 22:23:53.630739  766058 command_runner.go:130] > monitor_cgroup = ""
	I1226 22:23:53.630747  766058 command_runner.go:130] > monitor_exec_cgroup = ""
	I1226 22:23:53.630775  766058 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1226 22:23:53.630784  766058 command_runner.go:130] > # running containers
	I1226 22:23:53.630790  766058 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1226 22:23:53.630798  766058 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1226 22:23:53.630806  766058 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1226 22:23:53.630813  766058 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I1226 22:23:53.630824  766058 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1226 22:23:53.630830  766058 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1226 22:23:53.630840  766058 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1226 22:23:53.630848  766058 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1226 22:23:53.630858  766058 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1226 22:23:53.630864  766058 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1226 22:23:53.630872  766058 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1226 22:23:53.630879  766058 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1226 22:23:53.630887  766058 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1226 22:23:53.630899  766058 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix, and a set of resources it supports mutating.
	I1226 22:23:53.630909  766058 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1226 22:23:53.630920  766058 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1226 22:23:53.630931  766058 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1226 22:23:53.630944  766058 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1226 22:23:53.630952  766058 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1226 22:23:53.630961  766058 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1226 22:23:53.630966  766058 command_runner.go:130] > # Example:
	I1226 22:23:53.630972  766058 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1226 22:23:53.630978  766058 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1226 22:23:53.630985  766058 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1226 22:23:53.630996  766058 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1226 22:23:53.631001  766058 command_runner.go:130] > # cpuset = "0-1"
	I1226 22:23:53.631011  766058 command_runner.go:130] > # cpushares = "0"
	I1226 22:23:53.631016  766058 command_runner.go:130] > # Where:
	I1226 22:23:53.631022  766058 command_runner.go:130] > # The workload name is workload-type.
	I1226 22:23:53.631035  766058 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is an exact string match).
	I1226 22:23:53.631042  766058 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1226 22:23:53.631049  766058 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1226 22:23:53.631059  766058 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1226 22:23:53.631069  766058 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
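Putting the workload annotations together: a hedged sketch of a pod that opts into the example workload described above. The container name "app" and the cpushares value are hypothetical; the annotation keys follow the activation_annotation and annotation_prefix shown in the comments.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: workload-demo
  annotations:
    io.crio/workload: ""                               # activation annotation (key only)
    io.crio.workload-type/app: '{"cpushares": "512"}'  # hypothetical per-container override
spec:
  containers:
  - name: app
    image: docker.io/nginx:alpine
EOF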
	I1226 22:23:53.631074  766058 command_runner.go:130] > # 
	I1226 22:23:53.631082  766058 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1226 22:23:53.631097  766058 command_runner.go:130] > #
	I1226 22:23:53.631105  766058 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1226 22:23:53.631117  766058 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1226 22:23:53.631125  766058 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1226 22:23:53.631133  766058 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1226 22:23:53.631141  766058 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1226 22:23:53.631146  766058 command_runner.go:130] > [crio.image]
	I1226 22:23:53.631158  766058 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1226 22:23:53.631164  766058 command_runner.go:130] > # default_transport = "docker://"
	I1226 22:23:53.631176  766058 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1226 22:23:53.631184  766058 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1226 22:23:53.631193  766058 command_runner.go:130] > # global_auth_file = ""
	I1226 22:23:53.631200  766058 command_runner.go:130] > # The image used to instantiate infra containers.
	I1226 22:23:53.631206  766058 command_runner.go:130] > # This option supports live configuration reload.
	I1226 22:23:53.631212  766058 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1226 22:23:53.631220  766058 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1226 22:23:53.631232  766058 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1226 22:23:53.631240  766058 command_runner.go:130] > # This option supports live configuration reload.
	I1226 22:23:53.631248  766058 command_runner.go:130] > # pause_image_auth_file = ""
	I1226 22:23:53.631256  766058 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1226 22:23:53.631267  766058 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1226 22:23:53.631275  766058 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1226 22:23:53.631287  766058 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1226 22:23:53.631293  766058 command_runner.go:130] > # pause_command = "/pause"
	I1226 22:23:53.631301  766058 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1226 22:23:53.631309  766058 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1226 22:23:53.631320  766058 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1226 22:23:53.631328  766058 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1226 22:23:53.631338  766058 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1226 22:23:53.631344  766058 command_runner.go:130] > # signature_policy = ""
	I1226 22:23:53.631352  766058 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1226 22:23:53.631360  766058 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1226 22:23:53.631369  766058 command_runner.go:130] > # changing them here.
	I1226 22:23:53.631374  766058 command_runner.go:130] > # insecure_registries = [
	I1226 22:23:53.631379  766058 command_runner.go:130] > # ]
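Following the advice above to configure registries system-wide rather than in crio.conf, a sketch (with a placeholder registry address) of marking one registry insecure in containers-registries.conf(5) v2 format:

sudo tee -a /etc/containers/registries.conf <<'EOF'

[[registry]]
location = "registry.example.internal:5000"  # placeholder address
insecure = true
EOF
sudo systemctl restart crio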
	I1226 22:23:53.631387  766058 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind, and
	I1226 22:23:53.631393  766058 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I1226 22:23:53.631403  766058 command_runner.go:130] > # image_volumes = "mkdir"
	I1226 22:23:53.631410  766058 command_runner.go:130] > # Temporary directory to use for storing big files
	I1226 22:23:53.631420  766058 command_runner.go:130] > # big_files_temporary_dir = ""
	I1226 22:23:53.631428  766058 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1226 22:23:53.631436  766058 command_runner.go:130] > # CNI plugins.
	I1226 22:23:53.631441  766058 command_runner.go:130] > [crio.network]
	I1226 22:23:53.631450  766058 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1226 22:23:53.631458  766058 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1226 22:23:53.631463  766058 command_runner.go:130] > # cni_default_network = ""
	I1226 22:23:53.631470  766058 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1226 22:23:53.631476  766058 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1226 22:23:53.631486  766058 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1226 22:23:53.631491  766058 command_runner.go:130] > # plugin_dirs = [
	I1226 22:23:53.631501  766058 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1226 22:23:53.631505  766058 command_runner.go:130] > # ]
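To see what CRI-O would actually pick up under these defaults, a quick sketch (assuming the default paths above):

ls /etc/cni/net.d/   # network configs; CRI-O picks the first one found when cni_default_network is unset
ls /opt/cni/bin/     # plugin binaries (bridge, portmap, ...)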
	I1226 22:23:53.631513  766058 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1226 22:23:53.631521  766058 command_runner.go:130] > [crio.metrics]
	I1226 22:23:53.631528  766058 command_runner.go:130] > # Globally enable or disable metrics support.
	I1226 22:23:53.631533  766058 command_runner.go:130] > # enable_metrics = false
	I1226 22:23:53.631539  766058 command_runner.go:130] > # Specify enabled metrics collectors.
	I1226 22:23:53.631545  766058 command_runner.go:130] > # By default, all metrics are enabled.
	I1226 22:23:53.631553  766058 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1226 22:23:53.631564  766058 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1226 22:23:53.631572  766058 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1226 22:23:53.631581  766058 command_runner.go:130] > # metrics_collectors = [
	I1226 22:23:53.631850  766058 command_runner.go:130] > # 	"operations",
	I1226 22:23:53.631864  766058 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1226 22:23:53.631870  766058 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1226 22:23:53.631876  766058 command_runner.go:130] > # 	"operations_errors",
	I1226 22:23:53.631881  766058 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1226 22:23:53.631887  766058 command_runner.go:130] > # 	"image_pulls_by_name",
	I1226 22:23:53.631905  766058 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1226 22:23:53.631911  766058 command_runner.go:130] > # 	"image_pulls_failures",
	I1226 22:23:53.631916  766058 command_runner.go:130] > # 	"image_pulls_successes",
	I1226 22:23:53.631922  766058 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1226 22:23:53.631926  766058 command_runner.go:130] > # 	"image_layer_reuse",
	I1226 22:23:53.631931  766058 command_runner.go:130] > # 	"containers_oom_total",
	I1226 22:23:53.631936  766058 command_runner.go:130] > # 	"containers_oom",
	I1226 22:23:53.631944  766058 command_runner.go:130] > # 	"processes_defunct",
	I1226 22:23:53.631949  766058 command_runner.go:130] > # 	"operations_total",
	I1226 22:23:53.631957  766058 command_runner.go:130] > # 	"operations_latency_seconds",
	I1226 22:23:53.631965  766058 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1226 22:23:53.631974  766058 command_runner.go:130] > # 	"operations_errors_total",
	I1226 22:23:53.631980  766058 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1226 22:23:53.631985  766058 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1226 22:23:53.631991  766058 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1226 22:23:53.631996  766058 command_runner.go:130] > # 	"image_pulls_success_total",
	I1226 22:23:53.632002  766058 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1226 22:23:53.632007  766058 command_runner.go:130] > # 	"containers_oom_count_total",
	I1226 22:23:53.632012  766058 command_runner.go:130] > # ]
	I1226 22:23:53.632018  766058 command_runner.go:130] > # The port on which the metrics server will listen.
	I1226 22:23:53.632033  766058 command_runner.go:130] > # metrics_port = 9090
	I1226 22:23:53.632040  766058 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1226 22:23:53.632045  766058 command_runner.go:130] > # metrics_socket = ""
	I1226 22:23:53.632054  766058 command_runner.go:130] > # The certificate for the secure metrics server.
	I1226 22:23:53.632062  766058 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1226 22:23:53.632073  766058 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1226 22:23:53.632079  766058 command_runner.go:130] > # certificate on any modification event.
	I1226 22:23:53.632084  766058 command_runner.go:130] > # metrics_cert = ""
	I1226 22:23:53.632112  766058 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1226 22:23:53.632123  766058 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1226 22:23:53.632128  766058 command_runner.go:130] > # metrics_key = ""
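A sketch of turning the metrics server on via a drop-in and scraping it; the port matches the commented default above, and the crio_ metric-name prefix follows the collector note earlier in this section:

sudo tee /etc/crio/crio.conf.d/20-metrics.conf <<'EOF'
[crio.metrics]
enable_metrics = true
metrics_port = 9090
EOF
sudo systemctl restart crio
curl -s http://127.0.0.1:9090/metrics | grep '^crio_' | head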
	I1226 22:23:53.632135  766058 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1226 22:23:53.632142  766058 command_runner.go:130] > [crio.tracing]
	I1226 22:23:53.632150  766058 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1226 22:23:53.632158  766058 command_runner.go:130] > # enable_tracing = false
	I1226 22:23:53.632165  766058 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1226 22:23:53.632171  766058 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1226 22:23:53.632179  766058 command_runner.go:130] > # Number of samples to collect per million spans.
	I1226 22:23:53.632187  766058 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
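Similarly, a sketch of enabling trace export; the option names come from the comments above, and the full sampling rate is an assumption chosen for debugging:

sudo tee /etc/crio/crio.conf.d/30-tracing.conf <<'EOF'
[crio.tracing]
enable_tracing = true
tracing_endpoint = "0.0.0.0:4317"              # default; assumes an OTLP collector is listening there
tracing_sampling_rate_per_million = 1000000    # sample everything (debugging assumption)
EOF
sudo systemctl restart crio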
	I1226 22:23:53.632195  766058 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1226 22:23:53.632200  766058 command_runner.go:130] > [crio.stats]
	I1226 22:23:53.632207  766058 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1226 22:23:53.632216  766058 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1226 22:23:53.632222  766058 command_runner.go:130] > # stats_collection_period = 0
	I1226 22:23:53.634215  766058 command_runner.go:130] ! time="2023-12-26 22:23:53.624279964Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1226 22:23:53.634258  766058 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1226 22:23:53.634317  766058 cni.go:84] Creating CNI manager for ""
	I1226 22:23:53.634329  766058 cni.go:136] 2 nodes found, recommending kindnet
	I1226 22:23:53.634338  766058 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1226 22:23:53.634358  766058 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-772557 NodeName:multinode-772557-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1226 22:23:53.634484  766058 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-772557-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
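Once the cluster is up, the rendered configuration above can be compared with what kubeadm stored in the cluster (the same command the preflight output suggests later in this log):

kubectl -n kube-system get cm kubeadm-config -o yaml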
	
	I1226 22:23:53.634540  766058 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-772557-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-772557 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
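The unit shown above is installed as a systemd drop-in (the scp calls below write 10-kubeadm.conf and kubelet.service); the empty ExecStart= line clears the packaged command before the override applies. To inspect the merged result on the node, a sketch:

systemctl cat kubelet                 # unit file plus the 10-kubeadm.conf drop-in
systemctl status kubelet --no-pager   # confirm the overridden ExecStart is active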
	I1226 22:23:53.634605  766058 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1226 22:23:53.645132  766058 command_runner.go:130] > kubeadm
	I1226 22:23:53.645154  766058 command_runner.go:130] > kubectl
	I1226 22:23:53.645160  766058 command_runner.go:130] > kubelet
	I1226 22:23:53.646555  766058 binaries.go:44] Found k8s binaries, skipping transfer
	I1226 22:23:53.646629  766058 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1226 22:23:53.658460  766058 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1226 22:23:53.681605  766058 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1226 22:23:53.704639  766058 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1226 22:23:53.709344  766058 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
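The one-liner above is an idempotent /etc/hosts update: filter out any existing mapping, then append the current one. An annotated equivalent (a sketch, with a hypothetical temp-file path in place of the PID-based one):

{
  grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts   # drop any stale mapping
  printf '192.168.58.2\tcontrol-plane.minikube.internal\n'   # append the current one
} > /tmp/hosts.new                                           # hypothetical temp path
sudo cp /tmp/hosts.new /etc/hosts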
	I1226 22:23:53.723085  766058 host.go:66] Checking if "multinode-772557" exists ...
	I1226 22:23:53.723372  766058 config.go:182] Loaded profile config "multinode-772557": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1226 22:23:53.723371  766058 start.go:304] JoinCluster: &{Name:multinode-772557 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-772557 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 22:23:53.723451  766058 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1226 22:23:53.723517  766058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-772557
	I1226 22:23:53.742536  766058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33746 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/multinode-772557/id_rsa Username:docker}
	I1226 22:23:53.916932  766058 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token v0hwzu.x116czzg1qpubgtc --discovery-token-ca-cert-hash sha256:eb99bb3bfadffb0fd08fb657f91b723758be2a0ceabacd68fe612edc25108351 
	I1226 22:23:53.916992  766058 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1226 22:23:53.917020  766058 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token v0hwzu.x116czzg1qpubgtc --discovery-token-ca-cert-hash sha256:eb99bb3bfadffb0fd08fb657f91b723758be2a0ceabacd68fe612edc25108351 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-772557-m02"
	I1226 22:23:53.966168  766058 command_runner.go:130] > [preflight] Running pre-flight checks
	I1226 22:23:54.004701  766058 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1226 22:23:54.004734  766058 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1051-aws
	I1226 22:23:54.004742  766058 command_runner.go:130] > OS: Linux
	I1226 22:23:54.004749  766058 command_runner.go:130] > CGROUPS_CPU: enabled
	I1226 22:23:54.004777  766058 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1226 22:23:54.004797  766058 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1226 22:23:54.004807  766058 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1226 22:23:54.004814  766058 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1226 22:23:54.004826  766058 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1226 22:23:54.004836  766058 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1226 22:23:54.004846  766058 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1226 22:23:54.004853  766058 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1226 22:23:54.125263  766058 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1226 22:23:54.125293  766058 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1226 22:23:54.165955  766058 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1226 22:23:54.166309  766058 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1226 22:23:54.166501  766058 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1226 22:23:54.280169  766058 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1226 22:23:57.295078  766058 command_runner.go:130] > This node has joined the cluster:
	I1226 22:23:57.295156  766058 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1226 22:23:57.295175  766058 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1226 22:23:57.295187  766058 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1226 22:23:57.298410  766058 command_runner.go:130] ! W1226 22:23:53.965589    1031 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1226 22:23:57.298443  766058 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I1226 22:23:57.298456  766058 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1226 22:23:57.298475  766058 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token v0hwzu.x116czzg1qpubgtc --discovery-token-ca-cert-hash sha256:eb99bb3bfadffb0fd08fb657f91b723758be2a0ceabacd68fe612edc25108351 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-772557-m02": (3.381442922s)
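The manual equivalent of the join sequence above, sketched with placeholders for the token and CA hash; the unix:// scheme is used here because the warning above notes that scheme-less CRI endpoints are deprecated:

# On the control plane:
kubeadm token create --print-join-command --ttl=0
# On the worker, using the printed token/hash (placeholders here):
sudo kubeadm join control-plane.minikube.internal:8443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --cri-socket unix:///var/run/crio/crio.sock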
	I1226 22:23:57.298494  766058 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1226 22:23:57.562624  766058 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I1226 22:23:57.562710  766058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=393f165ced08f66e4386491f243850f87982a22b minikube.k8s.io/name=multinode-772557 minikube.k8s.io/updated_at=2023_12_26T22_23_57_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1226 22:23:57.668818  766058 command_runner.go:130] > node/multinode-772557-m02 labeled
	I1226 22:23:57.673311  766058 start.go:306] JoinCluster complete in 3.949933836s
	I1226 22:23:57.673340  766058 cni.go:84] Creating CNI manager for ""
	I1226 22:23:57.673347  766058 cni.go:136] 2 nodes found, recommending kindnet
	I1226 22:23:57.673399  766058 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1226 22:23:57.681580  766058 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1226 22:23:57.681605  766058 command_runner.go:130] >   Size: 4030506   	Blocks: 7880       IO Block: 4096   regular file
	I1226 22:23:57.681639  766058 command_runner.go:130] > Device: 36h/54d	Inode: 1306506     Links: 1
	I1226 22:23:57.681648  766058 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1226 22:23:57.681656  766058 command_runner.go:130] > Access: 2023-12-04 16:39:54.000000000 +0000
	I1226 22:23:57.681662  766058 command_runner.go:130] > Modify: 2023-12-04 16:39:54.000000000 +0000
	I1226 22:23:57.681669  766058 command_runner.go:130] > Change: 2023-12-26 21:45:19.091346626 +0000
	I1226 22:23:57.681675  766058 command_runner.go:130] >  Birth: 2023-12-26 21:45:19.047347634 +0000
	I1226 22:23:57.681716  766058 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1226 22:23:57.681725  766058 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1226 22:23:57.720169  766058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1226 22:23:58.121552  766058 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1226 22:23:58.128091  766058 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1226 22:23:58.131484  766058 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1226 22:23:58.148657  766058 command_runner.go:130] > daemonset.apps/kindnet configured
	I1226 22:23:58.155967  766058 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17857-697646/kubeconfig
	I1226 22:23:58.156232  766058 kapi.go:59] client config for multinode-772557: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17857-697646/.minikube/profiles/multinode-772557/client.crt", KeyFile:"/home/jenkins/minikube-integration/17857-697646/.minikube/profiles/multinode-772557/client.key", CAFile:"/home/jenkins/minikube-integration/17857-697646/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bfa00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1226 22:23:58.156577  766058 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1226 22:23:58.156586  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:58.156596  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:58.156605  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:58.160757  766058 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1226 22:23:58.160784  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:58.160792  766058 round_trippers.go:580]     Content-Length: 291
	I1226 22:23:58.160799  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:58 GMT
	I1226 22:23:58.160806  766058 round_trippers.go:580]     Audit-Id: beea30cc-42ef-4ace-a6ff-d0e5a2c26d1a
	I1226 22:23:58.160812  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:58.160819  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:58.160825  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:58.160836  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:58.160995  766058 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7703adf0-ff18-499b-9077-c17b95400379","resourceVersion":"446","creationTimestamp":"2023-12-26T22:22:53Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1226 22:23:58.161090  766058 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-772557" context rescaled to 1 replicas
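The Scale-subresource call above has a straightforward CLI equivalent (a sketch; the context name is taken from this profile):

kubectl --context multinode-772557 -n kube-system scale deployment coredns --replicas=1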
	I1226 22:23:58.161117  766058 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1226 22:23:58.164930  766058 out.go:177] * Verifying Kubernetes components...
	I1226 22:23:58.166891  766058 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1226 22:23:58.186311  766058 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17857-697646/kubeconfig
	I1226 22:23:58.186585  766058 kapi.go:59] client config for multinode-772557: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17857-697646/.minikube/profiles/multinode-772557/client.crt", KeyFile:"/home/jenkins/minikube-integration/17857-697646/.minikube/profiles/multinode-772557/client.key", CAFile:"/home/jenkins/minikube-integration/17857-697646/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bfa00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1226 22:23:58.186867  766058 node_ready.go:35] waiting up to 6m0s for node "multinode-772557-m02" to be "Ready" ...
	I1226 22:23:58.186952  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:23:58.186961  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:58.186971  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:58.186977  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:58.189760  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:58.189786  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:58.189795  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:58.189802  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:58.189808  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:58.189814  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:58.189828  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:58 GMT
	I1226 22:23:58.189837  766058 round_trippers.go:580]     Audit-Id: c6b59739-d664-43af-9393-d3373c303bfb
	I1226 22:23:58.190398  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"487","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5735 chars]
	I1226 22:23:58.687082  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:23:58.687111  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:58.687122  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:58.687129  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:58.689835  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:58.689856  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:58.689865  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:58 GMT
	I1226 22:23:58.689871  766058 round_trippers.go:580]     Audit-Id: 7466b844-63f9-4061-9f8c-66f51d73e185
	I1226 22:23:58.689877  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:58.689884  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:58.689890  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:58.689896  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:58.690084  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"487","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5735 chars]
	I1226 22:23:59.187310  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:23:59.187335  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:59.187346  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:59.187353  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:59.189996  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:23:59.190020  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:59.190029  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:59.190035  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:59.190041  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:59.190048  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:59.190055  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:59 GMT
	I1226 22:23:59.190061  766058 round_trippers.go:580]     Audit-Id: 09838813-d064-4e76-b23c-cafc1ea383e1
	I1226 22:23:59.190216  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"487","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5735 chars]
	I1226 22:23:59.687142  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:23:59.687165  766058 round_trippers.go:469] Request Headers:
	I1226 22:23:59.687176  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:23:59.687183  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:23:59.694441  766058 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1226 22:23:59.694464  766058 round_trippers.go:577] Response Headers:
	I1226 22:23:59.694472  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:23:59 GMT
	I1226 22:23:59.694479  766058 round_trippers.go:580]     Audit-Id: 56531034-1be6-4f5f-b4ac-d5754776b52d
	I1226 22:23:59.694490  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:23:59.694497  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:23:59.694503  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:23:59.694509  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:23:59.695017  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"487","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5735 chars]
	I1226 22:24:00.204921  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:00.204943  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:00.204954  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:00.204962  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:00.211263  766058 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1226 22:24:00.211307  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:00.211324  766058 round_trippers.go:580]     Audit-Id: de9560d5-424c-4f62-9b81-54c4c6ac3907
	I1226 22:24:00.211331  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:00.211338  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:00.211346  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:00.211355  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:00.211368  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:00 GMT
	I1226 22:24:00.211564  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"487","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5735 chars]
	I1226 22:24:00.211990  766058 node_ready.go:58] node "multinode-772557-m02" has status "Ready":"False"
	I1226 22:24:00.687042  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:00.687067  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:00.687076  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:00.687084  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:00.689582  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:00.689603  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:00.689612  766058 round_trippers.go:580]     Audit-Id: 5bd6d6c3-8fd3-4ca7-b2c5-cf0d6153f6e6
	I1226 22:24:00.689618  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:00.689625  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:00.689631  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:00.689637  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:00.689644  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:00 GMT
	I1226 22:24:00.689764  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"487","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5735 chars]
	I1226 22:24:01.187154  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:01.187182  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:01.187193  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:01.187202  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:01.189765  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:01.189800  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:01.189810  766058 round_trippers.go:580]     Audit-Id: bbaa0e04-6c83-4271-bf60-e30407655c2c
	I1226 22:24:01.189817  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:01.189824  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:01.189830  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:01.189836  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:01.189842  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:01 GMT
	I1226 22:24:01.189991  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"502","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 5844 chars]
	I1226 22:24:01.687065  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:01.687103  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:01.687115  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:01.687122  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:01.689563  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:01.689589  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:01.689597  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:01.689603  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:01.689609  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:01.689616  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:01 GMT
	I1226 22:24:01.689623  766058 round_trippers.go:580]     Audit-Id: 8758c14e-ac46-444d-8fdf-69e70f44028e
	I1226 22:24:01.689630  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:01.689757  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"502","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 5844 chars]
	I1226 22:24:02.188114  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:02.188145  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:02.188157  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:02.188164  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:02.190770  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:02.190796  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:02.190805  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:02.190812  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:02 GMT
	I1226 22:24:02.190818  766058 round_trippers.go:580]     Audit-Id: d85acf1d-38bd-4cbb-901d-db7de245f16d
	I1226 22:24:02.190824  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:02.190830  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:02.190836  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:02.190972  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"502","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 5844 chars]
	I1226 22:24:02.687739  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:02.687766  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:02.687776  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:02.687783  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:02.690423  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:02.690452  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:02.690461  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:02.690468  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:02 GMT
	I1226 22:24:02.690474  766058 round_trippers.go:580]     Audit-Id: fc7c721c-52bc-452d-a8c7-b77bc6559403
	I1226 22:24:02.690480  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:02.690486  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:02.690493  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:02.690623  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"502","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 5844 chars]
	I1226 22:24:02.691002  766058 node_ready.go:58] node "multinode-772557-m02" has status "Ready":"False"
	I1226 22:24:03.187756  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:03.187782  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:03.187792  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:03.187799  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:03.190344  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:03.190365  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:03.190373  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:03.190380  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:03.190386  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:03.190392  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:03 GMT
	I1226 22:24:03.190398  766058 round_trippers.go:580]     Audit-Id: 6e63765f-9bf4-4ea8-bd65-bc3cc753551b
	I1226 22:24:03.190405  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:03.190536  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"502","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 5844 chars]
	I1226 22:24:03.687682  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:03.687707  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:03.687718  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:03.687726  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:03.690289  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:03.690319  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:03.690328  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:03 GMT
	I1226 22:24:03.690334  766058 round_trippers.go:580]     Audit-Id: a66088db-67e1-4b19-9e0c-5ef750a73df6
	I1226 22:24:03.690343  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:03.690350  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:03.690356  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:03.690362  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:03.690486  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"502","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 5844 chars]
	I1226 22:24:04.187330  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:04.187356  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:04.187366  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:04.187373  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:04.190249  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:04.190276  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:04.190284  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:04.190292  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:04 GMT
	I1226 22:24:04.190300  766058 round_trippers.go:580]     Audit-Id: fb69fc57-2877-47c0-b159-ce952e3cc4ce
	I1226 22:24:04.190307  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:04.190313  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:04.190319  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:04.190460  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"502","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 5844 chars]
	I1226 22:24:04.687155  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:04.687182  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:04.687192  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:04.687200  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:04.689666  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:04.689695  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:04.689704  766058 round_trippers.go:580]     Audit-Id: e655624c-7f8f-435d-9cf9-1b83f3d19301
	I1226 22:24:04.689711  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:04.689717  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:04.689723  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:04.689737  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:04.689744  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:04 GMT
	I1226 22:24:04.690063  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"502","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 5844 chars]
	I1226 22:24:05.187796  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:05.187825  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:05.187845  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:05.187853  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:05.190692  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:05.190723  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:05.190743  766058 round_trippers.go:580]     Audit-Id: 9bb4d3f3-fcf7-41a2-9660-9654f9612f01
	I1226 22:24:05.190756  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:05.190762  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:05.190774  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:05.190781  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:05.190792  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:05 GMT
	I1226 22:24:05.190994  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"502","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 5844 chars]
	I1226 22:24:05.191540  766058 node_ready.go:58] node "multinode-772557-m02" has status "Ready":"False"
	I1226 22:24:05.687207  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:05.687230  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:05.687241  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:05.687249  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:05.689800  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:05.689823  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:05.689833  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:05.689840  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:05.689846  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:05.689852  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:05 GMT
	I1226 22:24:05.689859  766058 round_trippers.go:580]     Audit-Id: 5a0f25cf-53b0-4b4c-9c13-7a52c5515f8d
	I1226 22:24:05.689864  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:05.690028  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"502","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 5844 chars]
	I1226 22:24:06.187270  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:06.187312  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:06.187322  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:06.187330  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:06.189762  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:06.189782  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:06.189790  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:06.189796  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:06.189803  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:06.189812  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:06.189818  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:06 GMT
	I1226 22:24:06.189824  766058 round_trippers.go:580]     Audit-Id: 5fea6742-9561-4cba-a2ee-b4bc1d394259
	I1226 22:24:06.189970  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"502","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 5844 chars]
	I1226 22:24:06.687332  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:06.687361  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:06.687371  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:06.687378  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:06.689926  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:06.689950  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:06.689958  766058 round_trippers.go:580]     Audit-Id: 3c891880-a311-4618-91c8-96e93e0ea99f
	I1226 22:24:06.689965  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:06.689971  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:06.689977  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:06.689984  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:06.689990  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:06 GMT
	I1226 22:24:06.690122  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"502","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 5844 chars]
	I1226 22:24:07.187963  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:07.187986  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:07.187997  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:07.188004  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:07.190606  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:07.190632  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:07.190641  766058 round_trippers.go:580]     Audit-Id: 7dceed28-b083-4a27-89ff-e138f1a4aa4a
	I1226 22:24:07.190648  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:07.190654  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:07.190660  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:07.190666  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:07.190673  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:07 GMT
	I1226 22:24:07.190814  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"509","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6113 chars]
	I1226 22:24:07.687155  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:07.687178  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:07.687189  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:07.687207  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:07.689923  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:07.689967  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:07.689977  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:07 GMT
	I1226 22:24:07.689984  766058 round_trippers.go:580]     Audit-Id: 93a249ca-58f1-48e0-b6ef-06dcbfd48888
	I1226 22:24:07.689990  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:07.689997  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:07.690007  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:07.690013  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:07.690181  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"509","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6113 chars]
	I1226 22:24:07.690733  766058 node_ready.go:58] node "multinode-772557-m02" has status "Ready":"False"
	I1226 22:24:08.187430  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:08.187457  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:08.187477  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:08.187504  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:08.190068  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:08.190092  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:08.190101  766058 round_trippers.go:580]     Audit-Id: ab4df940-0f0c-483c-8873-b0f311a999e0
	I1226 22:24:08.190107  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:08.190131  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:08.190140  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:08.190147  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:08.190153  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:08 GMT
	I1226 22:24:08.190499  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"509","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6113 chars]
	I1226 22:24:08.687121  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:08.687148  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:08.687158  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:08.687166  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:08.689639  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:08.689659  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:08.689667  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:08.689673  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:08.689680  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:08.689686  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:08 GMT
	I1226 22:24:08.689692  766058 round_trippers.go:580]     Audit-Id: d69d7b26-2de6-46bc-8bfb-ad7c453a858a
	I1226 22:24:08.689699  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:08.689855  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"509","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6113 chars]
	I1226 22:24:09.187105  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:09.187124  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:09.187134  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:09.187140  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:09.189716  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:09.189738  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:09.189746  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:09.189753  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:09.189759  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:09.189775  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:09.189783  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:09 GMT
	I1226 22:24:09.189789  766058 round_trippers.go:580]     Audit-Id: f2e7c912-6819-48fa-bc96-a5843102f700
	I1226 22:24:09.189933  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"509","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6113 chars]
	I1226 22:24:09.687062  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:09.687085  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:09.687100  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:09.687108  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:09.689662  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:09.689694  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:09.689723  766058 round_trippers.go:580]     Audit-Id: f0bc482a-5052-4063-ae33-f6a6bb09d186
	I1226 22:24:09.689732  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:09.689742  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:09.689748  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:09.689757  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:09.689764  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:09 GMT
	I1226 22:24:09.690092  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"509","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6113 chars]
	I1226 22:24:10.187322  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:10.187344  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:10.187356  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:10.187363  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:10.190002  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:10.190028  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:10.190038  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:10.190045  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:10.190052  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:10 GMT
	I1226 22:24:10.190060  766058 round_trippers.go:580]     Audit-Id: c1178e7e-964e-4483-8c1c-ca2672d5e2bf
	I1226 22:24:10.190069  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:10.190078  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:10.190311  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"509","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6113 chars]
	I1226 22:24:10.190726  766058 node_ready.go:58] node "multinode-772557-m02" has status "Ready":"False"
	I1226 22:24:10.687149  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:10.687172  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:10.687183  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:10.687191  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:10.689815  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:10.689838  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:10.689846  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:10.689852  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:10.689859  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:10 GMT
	I1226 22:24:10.689865  766058 round_trippers.go:580]     Audit-Id: 48152b74-1e23-4e27-abad-59a19c30af4a
	I1226 22:24:10.689871  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:10.689878  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:10.690053  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"509","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6113 chars]
	I1226 22:24:11.187167  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:11.187193  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:11.187203  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:11.187211  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:11.190025  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:11.190064  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:11.190076  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:11.190084  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:11 GMT
	I1226 22:24:11.190093  766058 round_trippers.go:580]     Audit-Id: ee5d9f65-7ee2-4672-ab8b-2d0dd2241211
	I1226 22:24:11.190112  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:11.190119  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:11.190125  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:11.190387  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"509","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6113 chars]
	I1226 22:24:11.687395  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:11.687448  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:11.687459  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:11.687468  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:11.690169  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:11.690206  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:11.690215  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:11 GMT
	I1226 22:24:11.690223  766058 round_trippers.go:580]     Audit-Id: 180fe3b1-04b7-4307-8a5b-b4316aabbee2
	I1226 22:24:11.690229  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:11.690236  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:11.690247  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:11.690253  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:11.690817  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"509","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6113 chars]
	I1226 22:24:12.188072  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:12.188100  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:12.188110  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:12.188119  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:12.190794  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:12.190822  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:12.190831  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:12 GMT
	I1226 22:24:12.190838  766058 round_trippers.go:580]     Audit-Id: 7a692a08-bd73-4f8b-ac50-e5ebeb912fe5
	I1226 22:24:12.190844  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:12.190859  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:12.190874  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:12.190881  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:12.191236  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"509","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6113 chars]
	I1226 22:24:12.191651  766058 node_ready.go:58] node "multinode-772557-m02" has status "Ready":"False"
	I1226 22:24:12.687859  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:12.687884  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:12.687895  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:12.687902  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:12.690809  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:12.690841  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:12.690850  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:12.690857  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:12.690863  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:12 GMT
	I1226 22:24:12.690870  766058 round_trippers.go:580]     Audit-Id: 4bf7f0a8-985a-42c1-a1a7-b7c70bd826d5
	I1226 22:24:12.690876  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:12.690882  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:12.691079  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"509","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6113 chars]
	I1226 22:24:13.187832  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:13.187855  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:13.187865  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:13.187873  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:13.190378  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:13.190444  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:13.190481  766058 round_trippers.go:580]     Audit-Id: 49d0ca7c-be8f-4cdc-8405-9fd611ef8d7b
	I1226 22:24:13.190507  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:13.190527  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:13.190562  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:13.190587  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:13.190612  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:13 GMT
	I1226 22:24:13.190780  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"509","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6113 chars]
	I1226 22:24:13.687116  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:13.687142  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:13.687153  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:13.687160  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:13.689781  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:13.689813  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:13.689822  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:13.689830  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:13.689836  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:13 GMT
	I1226 22:24:13.689843  766058 round_trippers.go:580]     Audit-Id: a06bd670-23e0-40f8-9ed8-fdde5f50f1a4
	I1226 22:24:13.689849  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:13.689860  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:13.689993  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"509","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6113 chars]
	I1226 22:24:14.187128  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:14.187154  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:14.187164  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:14.187171  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:14.190737  766058 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 22:24:14.190761  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:14.190769  766058 round_trippers.go:580]     Audit-Id: f41400d4-eccd-4022-9ada-7d5d3586ee5a
	I1226 22:24:14.190775  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:14.190781  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:14.190788  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:14.190794  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:14.190800  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:14 GMT
	I1226 22:24:14.190942  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"509","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6113 chars]
	I1226 22:24:14.687055  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:14.687080  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:14.687095  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:14.687103  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:14.689778  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:14.689839  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:14.689854  766058 round_trippers.go:580]     Audit-Id: 35b67fee-769c-4282-9e4a-d07f2a7e3d5b
	I1226 22:24:14.689864  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:14.689870  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:14.689876  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:14.689883  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:14.689892  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:14 GMT
	I1226 22:24:14.690018  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"509","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6113 chars]
	I1226 22:24:14.690410  766058 node_ready.go:58] node "multinode-772557-m02" has status "Ready":"False"
	I1226 22:24:15.187165  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:15.187189  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:15.187200  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:15.187207  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:15.189965  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:15.189993  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:15.190003  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:15 GMT
	I1226 22:24:15.190011  766058 round_trippers.go:580]     Audit-Id: dd5c203c-36ba-4c0e-b4dc-5b90b12e06af
	I1226 22:24:15.190017  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:15.190024  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:15.190030  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:15.190037  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:15.190196  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"509","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6113 chars]
	I1226 22:24:15.687998  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:15.688029  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:15.688040  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:15.688052  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:15.690699  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:15.690729  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:15.690738  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:15.690745  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:15.690752  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:15.690759  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:15.690766  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:15 GMT
	I1226 22:24:15.690781  766058 round_trippers.go:580]     Audit-Id: 556cff01-e7c6-463f-be98-670dc6343592
	I1226 22:24:15.691138  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"509","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6113 chars]
	I1226 22:24:16.187800  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:16.187828  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:16.187838  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:16.187846  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:16.190233  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:16.190258  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:16.190267  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:16.190274  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:16.190280  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:16.190287  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:16.190300  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:16 GMT
	I1226 22:24:16.190306  766058 round_trippers.go:580]     Audit-Id: eddda6dd-ef09-4607-80c0-2487ba1fef03
	I1226 22:24:16.190705  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"509","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6113 chars]
	I1226 22:24:16.687854  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:16.687880  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:16.687891  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:16.687898  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:16.690543  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:16.690564  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:16.690572  766058 round_trippers.go:580]     Audit-Id: 49a06a32-1ec5-4dfe-a85b-5042b07a85f2
	I1226 22:24:16.690579  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:16.690586  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:16.690592  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:16.690601  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:16.690608  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:16 GMT
	I1226 22:24:16.690792  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"509","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6113 chars]
	I1226 22:24:16.691217  766058 node_ready.go:58] node "multinode-772557-m02" has status "Ready":"False"
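
The repeating block above is one iteration of minikube's readiness wait: node_ready.go re-issues GET /api/v1/nodes/multinode-772557-m02 roughly every 500ms and logs has status "Ready":"False" until the kubelet posts a Ready condition for the new m02 node. A minimal client-go sketch of that polling pattern, for illustration only (the name waitNodeReady, the fixed 500ms interval, and the kubeconfig loading are assumptions inferred from the log, not minikube's actual implementation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady mirrors the loop in the log: GET the node, inspect its
// Ready condition, and retry every 500ms until it is True or ctx expires.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("node %q not Ready: %w", name, ctx.Err())
		case <-time.After(500 * time.Millisecond):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitNodeReady(ctx, kubernetes.NewForConfigOrDie(cfg), "multinode-772557-m02"); err != nil {
		fmt.Println(err)
	}
}

The same condition can be checked by hand against the cluster with, for example, kubectl get node multinode-772557-m02 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'.
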
	I1226 22:24:17.187429  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:17.187493  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:17.187527  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:17.187548  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:17.190193  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:17.190215  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:17.190223  766058 round_trippers.go:580]     Audit-Id: 70842764-a9ca-4a95-a8b4-088033745099
	I1226 22:24:17.190230  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:17.190236  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:17.190243  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:17.190249  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:17.190255  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:17 GMT
	I1226 22:24:17.190365  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"509","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6113 chars]
	I1226 22:24:17.687610  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:17.687634  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:17.687644  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:17.687652  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:17.690195  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:17.690222  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:17.690231  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:17.690238  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:17.690244  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:17.690250  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:17 GMT
	I1226 22:24:17.690256  766058 round_trippers.go:580]     Audit-Id: a562890a-00d9-4b61-b11e-01c4f9b32365
	I1226 22:24:17.690263  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:17.690382  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"509","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6113 chars]
	I1226 22:24:18.187159  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:18.187186  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:18.187198  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:18.187205  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:18.189729  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:18.189756  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:18.189764  766058 round_trippers.go:580]     Audit-Id: 970fe12a-88d1-4c95-a2fc-eb17801206af
	I1226 22:24:18.189770  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:18.189776  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:18.189783  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:18.189789  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:18.189796  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:18 GMT
	I1226 22:24:18.189972  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"509","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6113 chars]
	I1226 22:24:18.688057  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:18.688083  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:18.688093  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:18.688101  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:18.690687  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:18.690714  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:18.690723  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:18.690730  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:18.690736  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:18 GMT
	I1226 22:24:18.690743  766058 round_trippers.go:580]     Audit-Id: ef629097-c62e-4939-a75f-a7c0bc0d590d
	I1226 22:24:18.690750  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:18.690756  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:18.690877  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"509","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6113 chars]
	I1226 22:24:18.691346  766058 node_ready.go:58] node "multinode-772557-m02" has status "Ready":"False"
	I1226 22:24:19.187059  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:19.187109  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:19.187120  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:19.187129  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:19.189670  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:19.189691  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:19.189699  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:19.189706  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:19.189714  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:19.189720  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:19.189727  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:19 GMT
	I1226 22:24:19.189733  766058 round_trippers.go:580]     Audit-Id: 967b1c3e-df33-4bcd-b5a2-a958c5808037
	I1226 22:24:19.189863  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"509","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6113 chars]
	I1226 22:24:19.687174  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:19.687200  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:19.687211  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:19.687218  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:19.689822  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:19.689846  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:19.689855  766058 round_trippers.go:580]     Audit-Id: 843bb74b-e739-4f02-94d0-ad0025d581ad
	I1226 22:24:19.689861  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:19.689867  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:19.689873  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:19.689879  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:19.689886  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:19 GMT
	I1226 22:24:19.690012  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"509","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6113 chars]
	I1226 22:24:20.187863  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:20.187891  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:20.187902  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:20.187909  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:20.190801  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:20.190829  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:20.190838  766058 round_trippers.go:580]     Audit-Id: b9a1327f-bf6a-4ddc-95fd-a8e2d8519005
	I1226 22:24:20.190845  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:20.190851  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:20.190858  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:20.190864  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:20.190871  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:20 GMT
	I1226 22:24:20.191028  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"509","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6113 chars]
	I1226 22:24:20.687922  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:20.687950  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:20.687961  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:20.687969  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:20.690566  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:20.690595  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:20.690604  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:20.690615  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:20 GMT
	I1226 22:24:20.690622  766058 round_trippers.go:580]     Audit-Id: d66c2a6c-c35c-4392-85ae-9ffca6e8aeba
	I1226 22:24:20.690628  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:20.690634  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:20.690641  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:20.690758  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"509","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6113 chars]
	I1226 22:24:21.187701  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:21.187726  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:21.187736  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:21.187744  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:21.190228  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:21.190252  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:21.190260  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:21.190268  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:21.190274  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:21.190282  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:21 GMT
	I1226 22:24:21.190288  766058 round_trippers.go:580]     Audit-Id: 886b2659-bd5e-4e2f-b547-47f48e57c440
	I1226 22:24:21.190294  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:21.190432  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"509","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6113 chars]
	I1226 22:24:21.190823  766058 node_ready.go:58] node "multinode-772557-m02" has status "Ready":"False"
	I1226 22:24:21.687174  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:21.687199  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:21.687210  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:21.687217  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:21.689816  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:21.689840  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:21.689849  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:21 GMT
	I1226 22:24:21.689858  766058 round_trippers.go:580]     Audit-Id: c3c05187-34fc-49e5-bfee-7aeb6fa62a72
	I1226 22:24:21.689864  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:21.689870  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:21.689877  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:21.689888  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:21.690041  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"509","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6113 chars]
	I1226 22:24:22.187708  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:22.187735  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:22.187746  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:22.187754  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:22.190278  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:22.190307  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:22.190315  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:22 GMT
	I1226 22:24:22.190322  766058 round_trippers.go:580]     Audit-Id: 17d430da-c2ff-47b4-9009-f49e715b9967
	I1226 22:24:22.190328  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:22.190335  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:22.190341  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:22.190347  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:22.190510  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"509","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6113 chars]
	I1226 22:24:22.687710  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:22.687746  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:22.687756  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:22.687763  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:22.690332  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:22.690361  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:22.690370  766058 round_trippers.go:580]     Audit-Id: e07463c5-ed8e-41fd-b911-90b56b74d8db
	I1226 22:24:22.690378  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:22.690385  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:22.690396  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:22.690403  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:22.690409  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:22 GMT
	I1226 22:24:22.690749  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"509","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6113 chars]
	I1226 22:24:23.187143  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:23.187167  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:23.187178  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:23.187186  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:23.189781  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:23.189805  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:23.189815  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:23.189823  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:23 GMT
	I1226 22:24:23.189830  766058 round_trippers.go:580]     Audit-Id: afc43920-bb21-499d-a2d7-e46e0f2e0385
	I1226 22:24:23.189836  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:23.189845  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:23.189857  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:23.190026  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"509","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6113 chars]
	I1226 22:24:23.687140  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:23.687161  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:23.687171  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:23.687179  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:23.689745  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:23.689773  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:23.689782  766058 round_trippers.go:580]     Audit-Id: 08a00724-d32d-4470-aec4-edd18341c06a
	I1226 22:24:23.689788  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:23.689794  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:23.689800  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:23.689807  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:23.689817  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:23 GMT
	I1226 22:24:23.690010  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"509","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6113 chars]
	I1226 22:24:23.690420  766058 node_ready.go:58] node "multinode-772557-m02" has status "Ready":"False"
	I1226 22:24:24.187080  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:24.187126  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:24.187136  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:24.187144  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:24.189866  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:24.189892  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:24.189901  766058 round_trippers.go:580]     Audit-Id: a46cacfe-1f9e-4a27-95bc-202332f8269d
	I1226 22:24:24.189907  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:24.189913  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:24.189920  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:24.189935  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:24.189941  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:24 GMT
	I1226 22:24:24.190158  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"509","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6113 chars]
	I1226 22:24:24.687251  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:24.687276  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:24.687286  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:24.687293  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:24.689973  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:24.690007  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:24.690016  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:24.690023  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:24 GMT
	I1226 22:24:24.690029  766058 round_trippers.go:580]     Audit-Id: 64d40e06-ad26-4f74-b29b-8139821818a9
	I1226 22:24:24.690035  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:24.690041  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:24.690047  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:24.690172  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"509","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6113 chars]
	I1226 22:24:25.187340  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:25.187368  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:25.187379  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:25.187387  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:25.190155  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:25.190182  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:25.190192  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:25.190198  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:25.190205  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:25.190211  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:25 GMT
	I1226 22:24:25.190218  766058 round_trippers.go:580]     Audit-Id: 32854e98-6e0f-4466-9e56-f87cdb58e94a
	I1226 22:24:25.190225  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:25.190456  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"509","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6113 chars]
	I1226 22:24:25.687143  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:25.687168  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:25.687178  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:25.687186  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:25.689752  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:25.689841  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:25.689873  766058 round_trippers.go:580]     Audit-Id: 5d47f10c-cc77-48d2-8b49-e40f1d614400
	I1226 22:24:25.689894  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:25.689907  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:25.689915  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:25.689921  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:25.689931  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:25 GMT
	I1226 22:24:25.690075  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"509","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6113 chars]
	I1226 22:24:25.690500  766058 node_ready.go:58] node "multinode-772557-m02" has status "Ready":"False"
	I1226 22:24:26.187139  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:26.187174  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:26.187184  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:26.187191  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:26.189860  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:26.189885  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:26.189893  766058 round_trippers.go:580]     Audit-Id: 167be317-fc01-43d9-89bc-a776e8e0976c
	I1226 22:24:26.189900  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:26.189906  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:26.189913  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:26.189919  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:26.189926  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:26 GMT
	I1226 22:24:26.190049  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"509","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6113 chars]
	I1226 22:24:26.687888  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:26.687917  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:26.687927  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:26.687935  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:26.690599  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:26.690623  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:26.690631  766058 round_trippers.go:580]     Audit-Id: 0efc5551-1d61-4009-801a-5d1468a750c1
	I1226 22:24:26.690638  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:26.690644  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:26.690652  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:26.690659  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:26.690666  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:26 GMT
	I1226 22:24:26.690794  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"509","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6113 chars]
	I1226 22:24:27.187609  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:27.187636  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:27.187646  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:27.187668  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:27.190291  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:27.190317  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:27.190326  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:27 GMT
	I1226 22:24:27.190337  766058 round_trippers.go:580]     Audit-Id: 1ee3df8b-b3f9-462e-880d-d3084ac5022e
	I1226 22:24:27.190344  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:27.190351  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:27.190366  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:27.190374  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:27.190516  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"509","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6113 chars]
	I1226 22:24:27.687793  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:27.687820  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:27.687833  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:27.687841  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:27.690343  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:27.690369  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:27.690378  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:27.690384  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:27 GMT
	I1226 22:24:27.690391  766058 round_trippers.go:580]     Audit-Id: 266be27c-2a4d-4e2b-b5dd-52f881ccb045
	I1226 22:24:27.690397  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:27.690404  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:27.690414  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:27.690520  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"509","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6113 chars]
	I1226 22:24:27.690911  766058 node_ready.go:58] node "multinode-772557-m02" has status "Ready":"False"
	I1226 22:24:28.187331  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:28.187353  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:28.187364  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:28.187372  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:28.189895  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:28.189920  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:28.189929  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:28 GMT
	I1226 22:24:28.189936  766058 round_trippers.go:580]     Audit-Id: 530b9cef-c301-46fc-a1ba-590b25c5c9ce
	I1226 22:24:28.189964  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:28.189978  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:28.189986  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:28.189997  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:28.190136  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"509","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 6113 chars]
	I1226 22:24:28.687609  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:28.687642  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:28.687652  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:28.687659  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:28.690631  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:28.690663  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:28.690676  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:28 GMT
	I1226 22:24:28.690687  766058 round_trippers.go:580]     Audit-Id: e27821a7-313d-420f-82dc-a6cf4e7c67d8
	I1226 22:24:28.690694  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:28.690705  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:28.690714  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:28.690721  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:28.691134  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"531","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 5930 chars]
	I1226 22:24:28.691545  766058 node_ready.go:49] node "multinode-772557-m02" has status "Ready":"True"
	I1226 22:24:28.691562  766058 node_ready.go:38] duration metric: took 30.504679049s waiting for node "multinode-772557-m02" to be "Ready" ...
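The 500 ms polling loop above is node_ready.go waiting for the freshly joined node's Ready condition to flip to True; each iteration is one of the GET /api/v1/nodes/multinode-772557-m02 round trips logged. A minimal client-go sketch of the same wait, assuming a kubeconfig at the default path (all names here are illustrative, not minikube's actual code):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node object until its Ready condition is True,
// mirroring the ~500ms cadence visible in the log above.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, nodeName string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(context.Background(), cs, "multinode-772557-m02"); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}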
	I1226 22:24:28.691573  766058 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1226 22:24:28.691635  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1226 22:24:28.691646  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:28.691654  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:28.691661  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:28.695341  766058 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 22:24:28.695365  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:28.695374  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:28.695381  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:28.695388  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:28 GMT
	I1226 22:24:28.695394  766058 round_trippers.go:580]     Audit-Id: 4cfd0778-c540-4de1-be85-ec3912d136fe
	I1226 22:24:28.695401  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:28.695410  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:28.696309  766058 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"531"},"items":[{"metadata":{"name":"coredns-5dd5756b68-k29sm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"931cdf23-56fe-45a4-afb5-7d30cf6c7d97","resourceVersion":"442","creationTimestamp":"2023-12-26T22:23:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"86d64134-44b9-4f35-8c5d-6492f5e0552e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"86d64134-44b9-4f35-8c5d-6492f5e0552e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68972 chars]
	I1226 22:24:28.699220  766058 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-k29sm" in "kube-system" namespace to be "Ready" ...
	I1226 22:24:28.699323  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-k29sm
	I1226 22:24:28.699333  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:28.699343  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:28.699350  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:28.702074  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:28.702099  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:28.702109  766058 round_trippers.go:580]     Audit-Id: c4bd1be5-286e-467d-a142-15844d91ec2f
	I1226 22:24:28.702116  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:28.702122  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:28.702129  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:28.702140  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:28.702149  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:28 GMT
	I1226 22:24:28.702249  766058 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-k29sm","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"931cdf23-56fe-45a4-afb5-7d30cf6c7d97","resourceVersion":"442","creationTimestamp":"2023-12-26T22:23:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"86d64134-44b9-4f35-8c5d-6492f5e0552e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"86d64134-44b9-4f35-8c5d-6492f5e0552e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1226 22:24:28.702764  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:24:28.702786  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:28.702794  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:28.702801  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:28.705333  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:28.705374  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:28.705383  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:28.705389  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:28 GMT
	I1226 22:24:28.705395  766058 round_trippers.go:580]     Audit-Id: d850a8dc-bdcf-460e-a9a1-63800567658b
	I1226 22:24:28.705402  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:28.705408  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:28.705417  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:28.705542  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"426","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1226 22:24:28.705939  766058 pod_ready.go:92] pod "coredns-5dd5756b68-k29sm" in "kube-system" namespace has status "Ready":"True"
	I1226 22:24:28.705956  766058 pod_ready.go:81] duration metric: took 6.70543ms waiting for pod "coredns-5dd5756b68-k29sm" in "kube-system" namespace to be "Ready" ...
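pod_ready.go applies the same pattern per system pod: fetch the pod, then inspect its PodReady condition. A sketch of that check against the coredns pod named in the log, again assuming the default kubeconfig (helper names are illustrative):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady mirrors the per-pod test above: a pod counts as "Ready"
// once its PodReady condition reports True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Pod name taken from the log above; any kube-system pod works.
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(),
		"coredns-5dd5756b68-k29sm", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("ready:", isPodReady(pod))
}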
	I1226 22:24:28.705968  766058 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-772557" in "kube-system" namespace to be "Ready" ...
	I1226 22:24:28.706027  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-772557
	I1226 22:24:28.706036  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:28.706044  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:28.706051  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:28.708465  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:28.708484  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:28.708492  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:28.708498  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:28.708504  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:28.708510  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:28.708535  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:28 GMT
	I1226 22:24:28.708543  766058 round_trippers.go:580]     Audit-Id: b64ba82e-9641-4162-a46a-9db8b7a30ad3
	I1226 22:24:28.708842  766058 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-772557","namespace":"kube-system","uid":"f03b0f35-667b-4397-8661-975404c492e6","resourceVersion":"314","creationTimestamp":"2023-12-26T22:22:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"e0cc8d87347d790eb697a7e6691995d5","kubernetes.io/config.mirror":"e0cc8d87347d790eb697a7e6691995d5","kubernetes.io/config.seen":"2023-12-26T22:22:53.416330825Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1226 22:24:28.709288  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:24:28.709302  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:28.709311  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:28.709318  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:28.711629  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:28.711659  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:28.711668  766058 round_trippers.go:580]     Audit-Id: 2b0d67d0-de47-422d-a87f-5126556a3e2c
	I1226 22:24:28.711674  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:28.711681  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:28.711689  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:28.711702  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:28.711709  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:28 GMT
	I1226 22:24:28.711851  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"426","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1226 22:24:28.712291  766058 pod_ready.go:92] pod "etcd-multinode-772557" in "kube-system" namespace has status "Ready":"True"
	I1226 22:24:28.712309  766058 pod_ready.go:81] duration metric: took 6.3319ms waiting for pod "etcd-multinode-772557" in "kube-system" namespace to be "Ready" ...
	I1226 22:24:28.712334  766058 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-772557" in "kube-system" namespace to be "Ready" ...
	I1226 22:24:28.712401  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-772557
	I1226 22:24:28.712411  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:28.712419  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:28.712426  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:28.714844  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:28.714865  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:28.714877  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:28 GMT
	I1226 22:24:28.714884  766058 round_trippers.go:580]     Audit-Id: 39fd6ae1-56bc-4fd1-b34f-3650b2ff1a63
	I1226 22:24:28.714890  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:28.714901  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:28.714921  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:28.714927  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:28.715296  766058 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-772557","namespace":"kube-system","uid":"afac54c2-df76-44f8-84ea-d9fd949afd91","resourceVersion":"294","creationTimestamp":"2023-12-26T22:22:53Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"c83f6e10207e0ba7cd7c29439b906882","kubernetes.io/config.mirror":"c83f6e10207e0ba7cd7c29439b906882","kubernetes.io/config.seen":"2023-12-26T22:22:53.416321636Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1226 22:24:28.715885  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:24:28.715900  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:28.715910  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:28.715917  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:28.718416  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:28.718458  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:28.718467  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:28.718474  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:28.718480  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:28.718487  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:28.718493  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:28 GMT
	I1226 22:24:28.718502  766058 round_trippers.go:580]     Audit-Id: d72d975c-00f9-4e8a-aaf1-b2397bb2ff06
	I1226 22:24:28.718623  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"426","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1226 22:24:28.719024  766058 pod_ready.go:92] pod "kube-apiserver-multinode-772557" in "kube-system" namespace has status "Ready":"True"
	I1226 22:24:28.719040  766058 pod_ready.go:81] duration metric: took 6.69043ms waiting for pod "kube-apiserver-multinode-772557" in "kube-system" namespace to be "Ready" ...
	I1226 22:24:28.719051  766058 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-772557" in "kube-system" namespace to be "Ready" ...
	I1226 22:24:28.719125  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-772557
	I1226 22:24:28.719135  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:28.719143  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:28.719150  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:28.721943  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:28.721969  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:28.721986  766058 round_trippers.go:580]     Audit-Id: 1e8cee31-bf54-4406-bb30-30ebb1ab388b
	I1226 22:24:28.721994  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:28.722004  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:28.722019  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:28.722029  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:28.722035  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:28 GMT
	I1226 22:24:28.722206  766058 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-772557","namespace":"kube-system","uid":"40cdd4d3-8f44-4eba-8df7-904793fc4571","resourceVersion":"291","creationTimestamp":"2023-12-26T22:22:53Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a2793c76a383a29211965eb883d37c03","kubernetes.io/config.mirror":"a2793c76a383a29211965eb883d37c03","kubernetes.io/config.seen":"2023-12-26T22:22:45.039269387Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1226 22:24:28.722858  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:24:28.722889  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:28.722899  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:28.722909  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:28.725471  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:28.725536  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:28.725559  766058 round_trippers.go:580]     Audit-Id: dddc0563-d32b-4df3-8345-7d920dc12996
	I1226 22:24:28.725585  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:28.725622  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:28.725636  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:28.725644  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:28.725651  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:28 GMT
	I1226 22:24:28.725767  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"426","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1226 22:24:28.726174  766058 pod_ready.go:92] pod "kube-controller-manager-multinode-772557" in "kube-system" namespace has status "Ready":"True"
	I1226 22:24:28.726193  766058 pod_ready.go:81] duration metric: took 7.1344ms waiting for pod "kube-controller-manager-multinode-772557" in "kube-system" namespace to be "Ready" ...
	I1226 22:24:28.726206  766058 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q2rbf" in "kube-system" namespace to be "Ready" ...
	I1226 22:24:28.888581  766058 request.go:629] Waited for 162.294385ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q2rbf
	I1226 22:24:28.888645  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q2rbf
	I1226 22:24:28.888654  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:28.888663  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:28.888673  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:28.891155  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:28.891194  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:28.891202  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:28 GMT
	I1226 22:24:28.891208  766058 round_trippers.go:580]     Audit-Id: 493c0cd9-3647-4a0d-84ff-5b84b4f2e8b2
	I1226 22:24:28.891214  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:28.891221  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:28.891227  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:28.891233  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:28.891362  766058 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q2rbf","generateName":"kube-proxy-","namespace":"kube-system","uid":"4ef274a5-a036-4559-babc-232be6318956","resourceVersion":"400","creationTimestamp":"2023-12-26T22:23:06Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f2bb5d32-e46e-4c09-914a-6e81f727613f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f2bb5d32-e46e-4c09-914a-6e81f727613f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
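The request.go:629 "Waited for ... due to client-side throttling" lines are client-go's own token-bucket rate limiter at work: rest.Config defaults to QPS 5 with a burst of 10, so the burst of status GETs here gets queued for a couple hundred milliseconds each. A sketch of where those knobs live (the raised values are illustrative, not what minikube configures):

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// With QPS/Burst left at zero, client-go falls back to its defaults
	// (QPS=5, Burst=10), which produces the request.go waits logged above.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	config.QPS = 50    // illustrative: steady-state requests per second
	config.Burst = 100 // illustrative: short-term burst allowance
	if _, err := kubernetes.NewForConfig(config); err != nil {
		panic(err)
	}
}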
	I1226 22:24:29.088204  766058 request.go:629] Waited for 196.356101ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:24:29.088284  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:24:29.088293  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:29.088309  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:29.088317  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:29.090862  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:29.090890  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:29.090899  766058 round_trippers.go:580]     Audit-Id: b709456f-9a53-4dc7-ab07-1cac3bcea3d3
	I1226 22:24:29.090905  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:29.090912  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:29.090918  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:29.090938  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:29.090952  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:29 GMT
	I1226 22:24:29.091080  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"426","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1226 22:24:29.091486  766058 pod_ready.go:92] pod "kube-proxy-q2rbf" in "kube-system" namespace has status "Ready":"True"
	I1226 22:24:29.091504  766058 pod_ready.go:81] duration metric: took 365.290063ms waiting for pod "kube-proxy-q2rbf" in "kube-system" namespace to be "Ready" ...
	I1226 22:24:29.091516  766058 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wm58w" in "kube-system" namespace to be "Ready" ...
	I1226 22:24:29.288278  766058 request.go:629] Waited for 196.680541ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wm58w
	I1226 22:24:29.288356  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wm58w
	I1226 22:24:29.288367  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:29.288376  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:29.288386  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:29.290846  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:29.290903  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:29.290912  766058 round_trippers.go:580]     Audit-Id: c5dacb16-cf3f-48a7-a121-6cd22c4521df
	I1226 22:24:29.290923  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:29.290929  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:29.290941  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:29.290961  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:29.290968  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:29 GMT
	I1226 22:24:29.291088  766058 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-wm58w","generateName":"kube-proxy-","namespace":"kube-system","uid":"88096fdf-439c-4992-bb8c-09c32a616dda","resourceVersion":"496","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f2bb5d32-e46e-4c09-914a-6e81f727613f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f2bb5d32-e46e-4c09-914a-6e81f727613f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I1226 22:24:29.487832  766058 request.go:629] Waited for 196.26377ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:29.487956  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557-m02
	I1226 22:24:29.487977  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:29.487987  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:29.487995  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:29.491233  766058 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 22:24:29.491263  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:29.491274  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:29.491281  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:29.491287  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:29.491296  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:29.491305  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:29 GMT
	I1226 22:24:29.491312  766058 round_trippers.go:580]     Audit-Id: bad3a149-db25-415e-a4bd-fa48a9767d92
	I1226 22:24:29.491439  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557-m02","uid":"56c9fe72-f217-40c2-8ce0-de101b7e868f","resourceVersion":"531","creationTimestamp":"2023-12-26T22:23:56Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_26T22_23_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:23:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 5930 chars]
	I1226 22:24:29.491868  766058 pod_ready.go:92] pod "kube-proxy-wm58w" in "kube-system" namespace has status "Ready":"True"
	I1226 22:24:29.491887  766058 pod_ready.go:81] duration metric: took 400.359418ms waiting for pod "kube-proxy-wm58w" in "kube-system" namespace to be "Ready" ...
	I1226 22:24:29.491901  766058 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-772557" in "kube-system" namespace to be "Ready" ...
	I1226 22:24:29.688180  766058 request.go:629] Waited for 196.18703ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-772557
	I1226 22:24:29.688259  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-772557
	I1226 22:24:29.688271  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:29.688284  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:29.688294  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:29.690837  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:29.690904  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:29.690918  766058 round_trippers.go:580]     Audit-Id: c41f549b-86d7-4cd3-8fd6-9503c60c4433
	I1226 22:24:29.690926  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:29.690933  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:29.690940  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:29.690953  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:29.690972  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:29 GMT
	I1226 22:24:29.691122  766058 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-772557","namespace":"kube-system","uid":"b424c74a-800c-4bd8-b8d3-ac5bb5afe0ba","resourceVersion":"292","creationTimestamp":"2023-12-26T22:22:53Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"3ae379e0edc009083526191a36073f44","kubernetes.io/config.mirror":"3ae379e0edc009083526191a36073f44","kubernetes.io/config.seen":"2023-12-26T22:22:53.416329340Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-26T22:22:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1226 22:24:29.887812  766058 request.go:629] Waited for 196.261908ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:24:29.887907  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-772557
	I1226 22:24:29.887918  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:29.887928  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:29.887935  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:29.890635  766058 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1226 22:24:29.890666  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:29.890675  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:29.890682  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:29 GMT
	I1226 22:24:29.890688  766058 round_trippers.go:580]     Audit-Id: 569f1418-0a52-4dda-9424-f33c2f0d5818
	I1226 22:24:29.890707  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:29.890728  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:29.890748  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:29.890882  766058 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"426","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-26T22:22:49Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1226 22:24:29.891321  766058 pod_ready.go:92] pod "kube-scheduler-multinode-772557" in "kube-system" namespace has status "Ready":"True"
	I1226 22:24:29.891337  766058 pod_ready.go:81] duration metric: took 399.428773ms waiting for pod "kube-scheduler-multinode-772557" in "kube-system" namespace to be "Ready" ...
	I1226 22:24:29.891349  766058 pod_ready.go:38] duration metric: took 1.199766622s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1226 22:24:29.891380  766058 system_svc.go:44] waiting for kubelet service to be running ....
	I1226 22:24:29.891451  766058 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1226 22:24:29.906447  766058 system_svc.go:56] duration metric: took 15.061037ms WaitForService to wait for kubelet.
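system_svc.go's kubelet check shells out to systemd: `systemctl is-active --quiet <unit>` exits 0 only while the unit is active, and minikube runs it through its SSH runner as shown above. A local-exec sketch of the equivalent probe (not minikube's runner code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet kubelet` prints nothing and signals the
	// service state purely through its exit code; Run returns a non-nil
	// error for any non-zero exit.
	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil)
}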
	I1226 22:24:29.906473  766058 kubeadm.go:581] duration metric: took 31.745328596s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1226 22:24:29.906493  766058 node_conditions.go:102] verifying NodePressure condition ...
	I1226 22:24:30.087911  766058 request.go:629] Waited for 181.325899ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1226 22:24:30.087971  766058 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1226 22:24:30.087978  766058 round_trippers.go:469] Request Headers:
	I1226 22:24:30.087987  766058 round_trippers.go:473]     Accept: application/json, */*
	I1226 22:24:30.087998  766058 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1226 22:24:30.091348  766058 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1226 22:24:30.091391  766058 round_trippers.go:577] Response Headers:
	I1226 22:24:30.091401  766058 round_trippers.go:580]     Content-Type: application/json
	I1226 22:24:30.091408  766058 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61120239-410e-4837-a672-f99fa7f552cc
	I1226 22:24:30.091415  766058 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6ebd83c8-1352-4560-af80-4dd1401e66cb
	I1226 22:24:30.091421  766058 round_trippers.go:580]     Date: Tue, 26 Dec 2023 22:24:30 GMT
	I1226 22:24:30.091429  766058 round_trippers.go:580]     Audit-Id: ce3f5fe3-6c64-4f66-b70a-7a9d2b428ada
	I1226 22:24:30.091441  766058 round_trippers.go:580]     Cache-Control: no-cache, private
	I1226 22:24:30.091646  766058 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"532"},"items":[{"metadata":{"name":"multinode-772557","uid":"a7444ec9-63ae-491f-8578-232ee0dfb431","resourceVersion":"426","creationTimestamp":"2023-12-26T22:22:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-772557","kubernetes.io/os":"linux","minikube.k8s.io/commit":"393f165ced08f66e4386491f243850f87982a22b","minikube.k8s.io/name":"multinode-772557","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_26T22_22_54_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 13004 chars]
	I1226 22:24:30.092375  766058 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1226 22:24:30.092400  766058 node_conditions.go:123] node cpu capacity is 2
	I1226 22:24:30.092410  766058 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1226 22:24:30.092416  766058 node_conditions.go:123] node cpu capacity is 2
	I1226 22:24:30.092421  766058 node_conditions.go:105] duration metric: took 185.923356ms to run NodePressure ...
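The NodePressure pass reads each node's capacity out of the NodeList fetched just above. A sketch that prints the same two figures per node (203034800Ki of ephemeral storage, 2 CPUs) with client-go, assuming the default kubeconfig:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Copy the quantities to locals so the pointer-receiver String()
		// method can be called on them.
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}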
	I1226 22:24:30.092434  766058 start.go:228] waiting for startup goroutines ...
	I1226 22:24:30.092459  766058 start.go:242] writing updated cluster config ...
	I1226 22:24:30.092834  766058 ssh_runner.go:195] Run: rm -f paused
	I1226 22:24:30.161213  766058 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I1226 22:24:30.164895  766058 out.go:177] * Done! kubectl is now configured to use "multinode-772557" cluster and "default" namespace by default
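The closing start.go:600 line compares kubectl's minor version against the cluster's; a skew of one minor version is within kubectl's support window, so only an informational note is logged. An illustrative reimplementation of that arithmetic (not minikube's actual check):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorOf extracts the minor component of a "major.minor.patch" version
// string, e.g. "1.29.0" -> 29. Parse errors are ignored for brevity.
func minorOf(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	n, _ := strconv.Atoi(parts[1])
	return n
}

func main() {
	kubectl, cluster := "1.29.0", "1.28.4" // versions from the log line above
	skew := minorOf(kubectl) - minorOf(cluster)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectl, cluster, skew)
}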
	
	
	==> CRI-O <==
	Dec 26 22:23:38 multinode-772557 crio[898]: time="2023-12-26 22:23:38.896950825Z" level=info msg="Starting container: 1ea9c8688b167550b5a307c27aa2d3ac10d35265d764056e129d5c46b3dd689f" id=2d908b50-f46d-491a-9055-1dc35af7676a name=/runtime.v1.RuntimeService/StartContainer
	Dec 26 22:23:38 multinode-772557 crio[898]: time="2023-12-26 22:23:38.913012471Z" level=info msg="Started container" PID=1924 containerID=1ea9c8688b167550b5a307c27aa2d3ac10d35265d764056e129d5c46b3dd689f description=kube-system/storage-provisioner/storage-provisioner id=2d908b50-f46d-491a-9055-1dc35af7676a name=/runtime.v1.RuntimeService/StartContainer sandboxID=dd73a30c209272d2d5270744cfa795169a961c3d2f9f9b4f2e0a6b954721e22a
	Dec 26 22:23:38 multinode-772557 crio[898]: time="2023-12-26 22:23:38.931834979Z" level=info msg="Created container 1c36e5591f3940e70400a881b4965cc1ce5928190631991c874a2f4ae8f25b7c: kube-system/coredns-5dd5756b68-k29sm/coredns" id=7fe3d5e9-3603-4aa5-932d-e9b1262a7bf6 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 26 22:23:38 multinode-772557 crio[898]: time="2023-12-26 22:23:38.932420361Z" level=info msg="Starting container: 1c36e5591f3940e70400a881b4965cc1ce5928190631991c874a2f4ae8f25b7c" id=cf62a9af-cc1e-4c05-83d4-6b799536216d name=/runtime.v1.RuntimeService/StartContainer
	Dec 26 22:23:38 multinode-772557 crio[898]: time="2023-12-26 22:23:38.943084641Z" level=info msg="Started container" PID=1947 containerID=1c36e5591f3940e70400a881b4965cc1ce5928190631991c874a2f4ae8f25b7c description=kube-system/coredns-5dd5756b68-k29sm/coredns id=cf62a9af-cc1e-4c05-83d4-6b799536216d name=/runtime.v1.RuntimeService/StartContainer sandboxID=5bbbbba189cc5545bcab858fbbe4247ca77a0d83856cd61029ffee2ff123f6a9
	Dec 26 22:24:33 multinode-772557 crio[898]: time="2023-12-26 22:24:33.238952927Z" level=info msg="Running pod sandbox: default/busybox-5bc68d56bd-ls5rz/POD" id=064bd34a-c22e-4ac5-9ee7-d53674769c14 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 26 22:24:33 multinode-772557 crio[898]: time="2023-12-26 22:24:33.239021528Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 26 22:24:33 multinode-772557 crio[898]: time="2023-12-26 22:24:33.262085843Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-ls5rz Namespace:default ID:374ab7432323120550e0b9bd880b917868869a699e2e5d427fd69b563ef1483f UID:69a83cc9-8ea8-45d2-a403-bd84f0426741 NetNS:/var/run/netns/afbc7ad8-07e4-4469-ad71-891b9c0fda03 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 26 22:24:33 multinode-772557 crio[898]: time="2023-12-26 22:24:33.262126991Z" level=info msg="Adding pod default_busybox-5bc68d56bd-ls5rz to CNI network \"kindnet\" (type=ptp)"
	Dec 26 22:24:33 multinode-772557 crio[898]: time="2023-12-26 22:24:33.277775936Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-ls5rz Namespace:default ID:374ab7432323120550e0b9bd880b917868869a699e2e5d427fd69b563ef1483f UID:69a83cc9-8ea8-45d2-a403-bd84f0426741 NetNS:/var/run/netns/afbc7ad8-07e4-4469-ad71-891b9c0fda03 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 26 22:24:33 multinode-772557 crio[898]: time="2023-12-26 22:24:33.277926062Z" level=info msg="Checking pod default_busybox-5bc68d56bd-ls5rz for CNI network kindnet (type=ptp)"
	Dec 26 22:24:33 multinode-772557 crio[898]: time="2023-12-26 22:24:33.280375614Z" level=info msg="Ran pod sandbox 374ab7432323120550e0b9bd880b917868869a699e2e5d427fd69b563ef1483f with infra container: default/busybox-5bc68d56bd-ls5rz/POD" id=064bd34a-c22e-4ac5-9ee7-d53674769c14 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 26 22:24:33 multinode-772557 crio[898]: time="2023-12-26 22:24:33.284496077Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=3f5d5c42-16f4-47c9-b232-c42a2cb2b6a2 name=/runtime.v1.ImageService/ImageStatus
	Dec 26 22:24:33 multinode-772557 crio[898]: time="2023-12-26 22:24:33.284737573Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=3f5d5c42-16f4-47c9-b232-c42a2cb2b6a2 name=/runtime.v1.ImageService/ImageStatus
	Dec 26 22:24:33 multinode-772557 crio[898]: time="2023-12-26 22:24:33.287356786Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=68cfaaff-79cd-46d0-9a83-4436dcfc9452 name=/runtime.v1.ImageService/PullImage
	Dec 26 22:24:33 multinode-772557 crio[898]: time="2023-12-26 22:24:33.293484055Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Dec 26 22:24:34 multinode-772557 crio[898]: time="2023-12-26 22:24:34.036845474Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Dec 26 22:24:35 multinode-772557 crio[898]: time="2023-12-26 22:24:35.121225786Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3" id=68cfaaff-79cd-46d0-9a83-4436dcfc9452 name=/runtime.v1.ImageService/PullImage
	Dec 26 22:24:35 multinode-772557 crio[898]: time="2023-12-26 22:24:35.124618463Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=e5017ecb-c4e5-4306-8a98-53f7a73fef4e name=/runtime.v1.ImageService/ImageStatus
	Dec 26 22:24:35 multinode-772557 crio[898]: time="2023-12-26 22:24:35.125471046Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1496796,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=e5017ecb-c4e5-4306-8a98-53f7a73fef4e name=/runtime.v1.ImageService/ImageStatus
	Dec 26 22:24:35 multinode-772557 crio[898]: time="2023-12-26 22:24:35.126759378Z" level=info msg="Creating container: default/busybox-5bc68d56bd-ls5rz/busybox" id=979b5aa1-021f-4951-bee1-1bb62183b9a9 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 26 22:24:35 multinode-772557 crio[898]: time="2023-12-26 22:24:35.127029123Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 26 22:24:35 multinode-772557 crio[898]: time="2023-12-26 22:24:35.192726550Z" level=info msg="Created container 4d693db09def17b8d72779e07f37851ed18743733af579139fb636e78bcaf5e1: default/busybox-5bc68d56bd-ls5rz/busybox" id=979b5aa1-021f-4951-bee1-1bb62183b9a9 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 26 22:24:35 multinode-772557 crio[898]: time="2023-12-26 22:24:35.193701921Z" level=info msg="Starting container: 4d693db09def17b8d72779e07f37851ed18743733af579139fb636e78bcaf5e1" id=919191a4-3d67-40af-a582-c95b2b999c70 name=/runtime.v1.RuntimeService/StartContainer
	Dec 26 22:24:35 multinode-772557 crio[898]: time="2023-12-26 22:24:35.203908882Z" level=info msg="Started container" PID=2086 containerID=4d693db09def17b8d72779e07f37851ed18743733af579139fb636e78bcaf5e1 description=default/busybox-5bc68d56bd-ls5rz/busybox id=919191a4-3d67-40af-a582-c95b2b999c70 name=/runtime.v1.RuntimeService/StartContainer sandboxID=374ab7432323120550e0b9bd880b917868869a699e2e5d427fd69b563ef1483f
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	4d693db09def1       gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3   5 seconds ago        Running             busybox                   0                   374ab74323231       busybox-5bc68d56bd-ls5rz
	1c36e5591f394       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      About a minute ago   Running             coredns                   0                   5bbbbba189cc5       coredns-5dd5756b68-k29sm
	1ea9c8688b167       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      About a minute ago   Running             storage-provisioner       0                   dd73a30c20927       storage-provisioner
	3a555e04ba8e4       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                      About a minute ago   Running             kindnet-cni               0                   c57678bacde6c       kindnet-xkncj
	8f02d9253a640       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39                                      About a minute ago   Running             kube-proxy                0                   d4ecd6ac218a3       kube-proxy-q2rbf
	f199c048ed0c8       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419                                      About a minute ago   Running             kube-apiserver            0                   8bdcdb4ddca6d       kube-apiserver-multinode-772557
	9aca290a77af6       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b                                      About a minute ago   Running             kube-controller-manager   0                   7f4d091934c4a       kube-controller-manager-multinode-772557
	2fa5bf25b9003       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54                                      About a minute ago   Running             kube-scheduler            0                   72fa5cef92594       kube-scheduler-multinode-772557
	6523b324b3315       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      About a minute ago   Running             etcd                      0                   da30628670e98       etcd-multinode-772557
	
	
	==> coredns [1c36e5591f3940e70400a881b4965cc1ce5928190631991c874a2f4ae8f25b7c] <==
	[INFO] 10.244.0.3:41180 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000098836s
	[INFO] 10.244.1.2:52068 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128801s
	[INFO] 10.244.1.2:48644 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001424065s
	[INFO] 10.244.1.2:47998 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000082993s
	[INFO] 10.244.1.2:40671 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000076043s
	[INFO] 10.244.1.2:51781 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001953899s
	[INFO] 10.244.1.2:48896 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00005942s
	[INFO] 10.244.1.2:51457 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000048228s
	[INFO] 10.244.1.2:59796 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064499s
	[INFO] 10.244.0.3:45358 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105294s
	[INFO] 10.244.0.3:60250 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000061874s
	[INFO] 10.244.0.3:50176 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00006161s
	[INFO] 10.244.0.3:53084 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080227s
	[INFO] 10.244.1.2:46641 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118955s
	[INFO] 10.244.1.2:49405 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000072326s
	[INFO] 10.244.1.2:52030 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000074493s
	[INFO] 10.244.1.2:36224 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000085315s
	[INFO] 10.244.0.3:53739 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126971s
	[INFO] 10.244.0.3:40385 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000394986s
	[INFO] 10.244.0.3:37894 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000149231s
	[INFO] 10.244.0.3:53490 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000212s
	[INFO] 10.244.1.2:39727 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145129s
	[INFO] 10.244.1.2:38806 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000050985s
	[INFO] 10.244.1.2:56199 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000056056s
	[INFO] 10.244.1.2:46203 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000047687s
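
The NXDOMAIN answers for the short name "kubernetes.default" next to NOERROR for "kubernetes.default.svc.cluster.local" are expected: the pod resolver walks the resolv.conf search path, and only the fully qualified form exists. This can be reproduced from inside the cluster with the busybox image already pulled in this run (a sketch; the pod name dns-probe is arbitrary):

    kubectl --context multinode-772557 run dns-probe --rm -it --restart=Never \
      --image=gcr.io/k8s-minikube/busybox:1.28 -- nslookup kubernetes.default.svc.cluster.local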
	
	
	==> describe nodes <==
	Name:               multinode-772557
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-772557
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393f165ced08f66e4386491f243850f87982a22b
	                    minikube.k8s.io/name=multinode-772557
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_26T22_22_54_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 26 Dec 2023 22:22:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-772557
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 26 Dec 2023 22:24:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 26 Dec 2023 22:23:38 +0000   Tue, 26 Dec 2023 22:22:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 26 Dec 2023 22:23:38 +0000   Tue, 26 Dec 2023 22:22:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 26 Dec 2023 22:23:38 +0000   Tue, 26 Dec 2023 22:22:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 26 Dec 2023 22:23:38 +0000   Tue, 26 Dec 2023 22:23:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-772557
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 4c1d3328461c4dc0b4ef20fb5271e6ec
	  System UUID:                195669d4-f193-4dea-866d-4e83c67abf5b
	  Boot ID:                    f8f887b2-8c20-433d-a967-90e814370f09
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-ls5rz                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-5dd5756b68-k29sm                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     93s
	  kube-system                 etcd-multinode-772557                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         107s
	  kube-system                 kindnet-xkncj                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      94s
	  kube-system                 kube-apiserver-multinode-772557             250m (12%)    0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-controller-manager-multinode-772557    200m (10%)    0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-proxy-q2rbf                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-scheduler-multinode-772557             100m (5%)     0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 92s                  kube-proxy       
	  Normal  Starting                 115s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  115s (x8 over 115s)  kubelet          Node multinode-772557 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s (x8 over 115s)  kubelet          Node multinode-772557 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s (x8 over 115s)  kubelet          Node multinode-772557 status is now: NodeHasSufficientPID
	  Normal  Starting                 107s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  107s                 kubelet          Node multinode-772557 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    107s                 kubelet          Node multinode-772557 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     107s                 kubelet          Node multinode-772557 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           94s                  node-controller  Node multinode-772557 event: Registered Node multinode-772557 in Controller
	  Normal  NodeReady                62s                  kubelet          Node multinode-772557 status is now: NodeReady
	
	
	Name:               multinode-772557-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-772557-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=393f165ced08f66e4386491f243850f87982a22b
	                    minikube.k8s.io/name=multinode-772557
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2023_12_26T22_23_57_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 26 Dec 2023 22:23:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-772557-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 26 Dec 2023 22:24:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 26 Dec 2023 22:24:28 +0000   Tue, 26 Dec 2023 22:23:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 26 Dec 2023 22:24:28 +0000   Tue, 26 Dec 2023 22:23:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 26 Dec 2023 22:24:28 +0000   Tue, 26 Dec 2023 22:23:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 26 Dec 2023 22:24:28 +0000   Tue, 26 Dec 2023 22:24:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-772557-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 b0dd748e489a42fca760a6e3a0d2808c
	  System UUID:                d325553c-a670-4dab-91e9-8b7f8ae3e458
	  Boot ID:                    f8f887b2-8c20-433d-a967-90e814370f09
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-sffk7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kindnet-dbr68               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      44s
	  kube-system                 kube-proxy-wm58w            0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 42s                kube-proxy       
	  Normal  NodeHasSufficientMemory  44s (x5 over 45s)  kubelet          Node multinode-772557-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    44s (x5 over 45s)  kubelet          Node multinode-772557-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     44s (x5 over 45s)  kubelet          Node multinode-772557-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           39s                node-controller  Node multinode-772557-m02 event: Registered Node multinode-772557-m02 in Controller
	  Normal  NodeReady                12s                kubelet          Node multinode-772557-m02 status is now: NodeReady
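
Both nodes report Ready and identical capacity, which is expected with the docker driver: each minikube "node" is a container that sees the full host resources (2 CPUs and ~8 GiB here). A one-line cross-check of roles, versions, and internal IPs:

    kubectl --context multinode-772557 get nodes -o wide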
	
	
	==> dmesg <==
	[  +0.001236] FS-Cache: O-key=[8] '14613b0000000000'
	[  +0.000818] FS-Cache: N-cookie c=00000042 [p=00000039 fl=2 nc=0 na=1]
	[  +0.001056] FS-Cache: N-cookie d=00000000b9607d6a{9p.inode} n=00000000146f85f7
	[  +0.001167] FS-Cache: N-key=[8] '14613b0000000000'
	[  +0.003514] FS-Cache: Duplicate cookie detected
	[  +0.000807] FS-Cache: O-cookie c=0000003c [p=00000039 fl=226 nc=0 na=1]
	[  +0.001079] FS-Cache: O-cookie d=00000000b9607d6a{9p.inode} n=0000000079163410
	[  +0.001150] FS-Cache: O-key=[8] '14613b0000000000'
	[  +0.000783] FS-Cache: N-cookie c=00000043 [p=00000039 fl=2 nc=0 na=1]
	[  +0.001037] FS-Cache: N-cookie d=00000000b9607d6a{9p.inode} n=00000000f8dfdcd3
	[  +0.001195] FS-Cache: N-key=[8] '14613b0000000000'
	[  +2.993685] FS-Cache: Duplicate cookie detected
	[  +0.000876] FS-Cache: O-cookie c=0000003a [p=00000039 fl=226 nc=0 na=1]
	[  +0.001217] FS-Cache: O-cookie d=00000000b9607d6a{9p.inode} n=00000000ec3309d9
	[  +0.001222] FS-Cache: O-key=[8] '13613b0000000000'
	[  +0.000879] FS-Cache: N-cookie c=00000045 [p=00000039 fl=2 nc=0 na=1]
	[  +0.001151] FS-Cache: N-cookie d=00000000b9607d6a{9p.inode} n=00000000cf0f6968
	[  +0.001228] FS-Cache: N-key=[8] '13613b0000000000'
	[  +0.372532] FS-Cache: Duplicate cookie detected
	[  +0.000898] FS-Cache: O-cookie c=0000003f [p=00000039 fl=226 nc=0 na=1]
	[  +0.001163] FS-Cache: O-cookie d=00000000b9607d6a{9p.inode} n=0000000068094209
	[  +0.001226] FS-Cache: O-key=[8] '19613b0000000000'
	[  +0.000831] FS-Cache: N-cookie c=00000046 [p=00000039 fl=2 nc=0 na=1]
	[  +0.001168] FS-Cache: N-cookie d=00000000b9607d6a{9p.inode} n=0000000030afdcd3
	[  +0.001211] FS-Cache: N-key=[8] '19613b0000000000'
	
	
	==> etcd [6523b324b33155b61b9f819d14237ed76e97b02720c7d82e3082f624837934ab] <==
	{"level":"info","ts":"2023-12-26T22:22:45.841994Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-12-26T22:22:45.842273Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-26T22:22:45.842358Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-26T22:22:45.842511Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-12-26T22:22:45.842553Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-12-26T22:22:45.84319Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-12-26T22:22:45.843365Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-12-26T22:22:46.796556Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-26T22:22:46.796675Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-26T22:22:46.796732Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-12-26T22:22:46.796775Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-12-26T22:22:46.796816Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-12-26T22:22:46.796858Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-12-26T22:22:46.796893Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-12-26T22:22:46.800697Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-772557 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-26T22:22:46.80087Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-26T22:22:46.801797Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-26T22:22:46.801906Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-26T22:22:46.802976Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-26T22:22:46.809012Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-26T22:22:46.809196Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-26T22:22:46.809264Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-26T22:22:46.801833Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-12-26T22:22:46.812545Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-26T22:22:46.812577Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 22:24:40 up  6:06,  0 users,  load average: 1.07, 1.59, 1.32
	Linux multinode-772557 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [3a555e04ba8e4ae087b9c170d2307a4fbb9b512853d62876a1395095f538a454] <==
	I1226 22:23:37.886431       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1226 22:23:37.886461       1 main.go:227] handling current node
	I1226 22:23:47.900940       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1226 22:23:47.900969       1 main.go:227] handling current node
	I1226 22:23:57.915171       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1226 22:23:57.915269       1 main.go:227] handling current node
	I1226 22:23:57.915305       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1226 22:23:57.915493       1 main.go:250] Node multinode-772557-m02 has CIDR [10.244.1.0/24] 
	I1226 22:23:57.915687       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I1226 22:24:07.919798       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1226 22:24:07.919824       1 main.go:227] handling current node
	I1226 22:24:07.919835       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1226 22:24:07.919840       1 main.go:250] Node multinode-772557-m02 has CIDR [10.244.1.0/24] 
	I1226 22:24:17.926784       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1226 22:24:17.926814       1 main.go:227] handling current node
	I1226 22:24:17.926827       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1226 22:24:17.926835       1 main.go:250] Node multinode-772557-m02 has CIDR [10.244.1.0/24] 
	I1226 22:24:27.937030       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1226 22:24:27.937057       1 main.go:227] handling current node
	I1226 22:24:27.937069       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1226 22:24:27.937074       1 main.go:250] Node multinode-772557-m02 has CIDR [10.244.1.0/24] 
	I1226 22:24:37.941501       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1226 22:24:37.942551       1 main.go:227] handling current node
	I1226 22:24:37.942626       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1226 22:24:37.942707       1 main.go:250] Node multinode-772557-m02 has CIDR [10.244.1.0/24] 
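
kindnet's job on each node is visible above: once multinode-772557-m02 registers with CIDR 10.244.1.0/24, it installs a host route to that pod subnet via the node IP 192.168.58.3. The route can be verified from the primary node (a sketch using the profile name from this run):

    minikube -p multinode-772557 ssh -- ip route | grep 10.244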
	
	
	==> kube-apiserver [f199c048ed0c8c4d2c15587ddaeebf7229fcbf6c780ee570d6b1f59ef7fcdc20] <==
	I1226 22:22:49.988456       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1226 22:22:49.988497       1 cache.go:39] Caches are synced for autoregister controller
	I1226 22:22:50.018076       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1226 22:22:50.186270       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1226 22:22:50.787907       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1226 22:22:50.792961       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1226 22:22:50.792986       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1226 22:22:51.441856       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1226 22:22:51.482853       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1226 22:22:51.593869       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1226 22:22:51.602750       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I1226 22:22:51.603945       1 controller.go:624] quota admission added evaluator for: endpoints
	I1226 22:22:51.609218       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1226 22:22:51.927282       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1226 22:22:53.318882       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1226 22:22:53.335263       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1226 22:22:53.350563       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1226 22:23:06.110722       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1226 22:23:06.700625       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	E1226 22:23:56.342732       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E1226 22:23:56.342765       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E1226 22:23:56.344379       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E1226 22:23:56.344575       1 timeout.go:142] post-timeout activity - time-elapsed: 2.134687ms, GET "/apis/storage.k8s.io/v1/csidrivers" result: <nil>
	E1226 22:24:36.928678       1 upgradeaware.go:425] Error proxying data from client to backend: write tcp 192.168.58.2:39994->192.168.58.2:10250: write: broken pipe
	E1226 22:24:37.406281       1 upgradeaware.go:439] Error proxying data from backend to client: write tcp 192.168.58.2:8443->192.168.58.1:57996: write: broken pipe
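
The two "broken pipe" proxy errors at 22:24:36-37 line up with the kubelet's own broken-pipe message below and typically just mean a client hung up on a streamed connection (exec/logs) mid-transfer. If apiserver health were in question, its aggregated readiness checks can be dumped directly (a sketch):

    kubectl --context multinode-772557 get --raw='/readyz?verbose'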
	
	
	==> kube-controller-manager [9aca290a77af6198784166f75b006c391a4ca5960499cd3cb7b6f170f6cd433e] <==
	I1226 22:23:08.163819       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="158.306µs"
	I1226 22:23:38.157708       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="57.599µs"
	I1226 22:23:38.181890       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="127.562µs"
	I1226 22:23:39.714001       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.085599ms"
	I1226 22:23:39.714467       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="163.606µs"
	I1226 22:23:41.100134       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1226 22:23:56.904827       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-772557-m02\" does not exist"
	I1226 22:23:56.922079       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-772557-m02" podCIDRs=["10.244.1.0/24"]
	I1226 22:23:56.933025       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-dbr68"
	I1226 22:23:56.933170       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-wm58w"
	I1226 22:24:01.103914       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-772557-m02"
	I1226 22:24:01.103993       1 event.go:307] "Event occurred" object="multinode-772557-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-772557-m02 event: Registered Node multinode-772557-m02 in Controller"
	I1226 22:24:28.594437       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-772557-m02"
	I1226 22:24:31.062794       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1226 22:24:31.093199       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-sffk7"
	I1226 22:24:31.110623       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-ls5rz"
	I1226 22:24:31.143721       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="80.547006ms"
	I1226 22:24:31.144772       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-sffk7" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-sffk7"
	I1226 22:24:31.169411       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="25.561177ms"
	I1226 22:24:31.185677       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="16.206093ms"
	I1226 22:24:31.185819       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="39.334µs"
	I1226 22:24:33.586762       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="4.756526ms"
	I1226 22:24:33.588061       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="59.683µs"
	I1226 22:24:35.789883       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="4.729204ms"
	I1226 22:24:35.790358       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="64.499µs"
	
	
	==> kube-proxy [8f02d9253a6403a78e97fed13990f063580a0570d48cd9e6149863d2d0798f0f] <==
	I1226 22:23:08.150801       1 server_others.go:69] "Using iptables proxy"
	I1226 22:23:08.197172       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I1226 22:23:08.251087       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1226 22:23:08.253677       1 server_others.go:152] "Using iptables Proxier"
	I1226 22:23:08.253714       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1226 22:23:08.253722       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1226 22:23:08.253769       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1226 22:23:08.254023       1 server.go:846] "Version info" version="v1.28.4"
	I1226 22:23:08.254041       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1226 22:23:08.255532       1 config.go:188] "Starting service config controller"
	I1226 22:23:08.255546       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1226 22:23:08.255564       1 config.go:97] "Starting endpoint slice config controller"
	I1226 22:23:08.255568       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1226 22:23:08.255899       1 config.go:315] "Starting node config controller"
	I1226 22:23:08.255916       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1226 22:23:08.356508       1 shared_informer.go:318] Caches are synced for node config
	I1226 22:23:08.356562       1 shared_informer.go:318] Caches are synced for service config
	I1226 22:23:08.356585       1 shared_informer.go:318] Caches are synced for endpoint slice config
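
kube-proxy confirms it is running the iptables proxier in IPv4-only mode (no IPv6 cluster CIDR is configured, hence the no-op detect-local fallback). The resulting service rules land in the KUBE-SERVICES chain, which can be spot-checked from the node (a sketch; sudo assumed inside the node):

    minikube -p multinode-772557 ssh -- sudo iptables -t nat -L KUBE-SERVICES -n | head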
	
	
	==> kube-scheduler [2fa5bf25b9003c19cf30731a663e2de0ac935c5a3d6d8c5b45c228c4f17f9964] <==
	W1226 22:22:50.142774       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1226 22:22:50.143261       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1226 22:22:50.142833       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1226 22:22:50.143336       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1226 22:22:50.142881       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1226 22:22:50.143402       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1226 22:22:50.142893       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1226 22:22:50.143469       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1226 22:22:50.983533       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1226 22:22:50.984223       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1226 22:22:51.012183       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1226 22:22:51.012308       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1226 22:22:51.044673       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1226 22:22:51.044850       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1226 22:22:51.081460       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1226 22:22:51.081590       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1226 22:22:51.175344       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1226 22:22:51.175390       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1226 22:22:51.198191       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1226 22:22:51.198227       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1226 22:22:51.215285       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1226 22:22:51.215413       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1226 22:22:51.239982       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1226 22:22:51.240020       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I1226 22:22:51.724219       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 26 22:23:06 multinode-772557 kubelet[1385]: I1226 22:23:06.832378    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dbf3a7e0-3a68-43d0-9ec1-5ec07e8f72ca-lib-modules\") pod \"kindnet-xkncj\" (UID: \"dbf3a7e0-3a68-43d0-9ec1-5ec07e8f72ca\") " pod="kube-system/kindnet-xkncj"
	Dec 26 22:23:06 multinode-772557 kubelet[1385]: I1226 22:23:06.832400    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4flbh\" (UniqueName: \"kubernetes.io/projected/4ef274a5-a036-4559-babc-232be6318956-kube-api-access-4flbh\") pod \"kube-proxy-q2rbf\" (UID: \"4ef274a5-a036-4559-babc-232be6318956\") " pod="kube-system/kube-proxy-q2rbf"
	Dec 26 22:23:06 multinode-772557 kubelet[1385]: I1226 22:23:06.832427    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/dbf3a7e0-3a68-43d0-9ec1-5ec07e8f72ca-cni-cfg\") pod \"kindnet-xkncj\" (UID: \"dbf3a7e0-3a68-43d0-9ec1-5ec07e8f72ca\") " pod="kube-system/kindnet-xkncj"
	Dec 26 22:23:06 multinode-772557 kubelet[1385]: I1226 22:23:06.832453    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4ef274a5-a036-4559-babc-232be6318956-kube-proxy\") pod \"kube-proxy-q2rbf\" (UID: \"4ef274a5-a036-4559-babc-232be6318956\") " pod="kube-system/kube-proxy-q2rbf"
	Dec 26 22:23:07 multinode-772557 kubelet[1385]: W1226 22:23:07.058286    1385 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/ed1900d23c88a4acb8feaeb89fccc502b26fd99f3f09b7aaef22ccd1d6bfc430/crio-d4ecd6ac218a3a9015c4f0df11d0021f3a849e04756840ad1beda5aa94eca3b8 WatchSource:0}: Error finding container d4ecd6ac218a3a9015c4f0df11d0021f3a849e04756840ad1beda5aa94eca3b8: Status 404 returned error can't find the container with id d4ecd6ac218a3a9015c4f0df11d0021f3a849e04756840ad1beda5aa94eca3b8
	Dec 26 22:23:07 multinode-772557 kubelet[1385]: W1226 22:23:07.070909    1385 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/ed1900d23c88a4acb8feaeb89fccc502b26fd99f3f09b7aaef22ccd1d6bfc430/crio-c57678bacde6c1666c2424f9113e815595ba7a531bb8a7fea8502eb458cb9aea WatchSource:0}: Error finding container c57678bacde6c1666c2424f9113e815595ba7a531bb8a7fea8502eb458cb9aea: Status 404 returned error can't find the container with id c57678bacde6c1666c2424f9113e815595ba7a531bb8a7fea8502eb458cb9aea
	Dec 26 22:23:08 multinode-772557 kubelet[1385]: I1226 22:23:08.039731    1385 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-xkncj" podStartSLOduration=2.039567945 podCreationTimestamp="2023-12-26 22:23:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-26 22:23:07.883963391 +0000 UTC m=+14.598706526" watchObservedRunningTime="2023-12-26 22:23:08.039567945 +0000 UTC m=+14.754311071"
	Dec 26 22:23:38 multinode-772557 kubelet[1385]: I1226 22:23:38.129844    1385 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 26 22:23:38 multinode-772557 kubelet[1385]: I1226 22:23:38.156982    1385 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-q2rbf" podStartSLOduration=32.156919725 podCreationTimestamp="2023-12-26 22:23:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-26 22:23:08.056011735 +0000 UTC m=+14.770754853" watchObservedRunningTime="2023-12-26 22:23:38.156919725 +0000 UTC m=+44.871662875"
	Dec 26 22:23:38 multinode-772557 kubelet[1385]: I1226 22:23:38.157415    1385 topology_manager.go:215] "Topology Admit Handler" podUID="931cdf23-56fe-45a4-afb5-7d30cf6c7d97" podNamespace="kube-system" podName="coredns-5dd5756b68-k29sm"
	Dec 26 22:23:38 multinode-772557 kubelet[1385]: I1226 22:23:38.160314    1385 topology_manager.go:215] "Topology Admit Handler" podUID="f7fbeb0e-5dd7-4776-a9b6-5e219f6c6e4b" podNamespace="kube-system" podName="storage-provisioner"
	Dec 26 22:23:38 multinode-772557 kubelet[1385]: I1226 22:23:38.353441    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfljh\" (UniqueName: \"kubernetes.io/projected/931cdf23-56fe-45a4-afb5-7d30cf6c7d97-kube-api-access-lfljh\") pod \"coredns-5dd5756b68-k29sm\" (UID: \"931cdf23-56fe-45a4-afb5-7d30cf6c7d97\") " pod="kube-system/coredns-5dd5756b68-k29sm"
	Dec 26 22:23:38 multinode-772557 kubelet[1385]: I1226 22:23:38.353496    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f7fbeb0e-5dd7-4776-a9b6-5e219f6c6e4b-tmp\") pod \"storage-provisioner\" (UID: \"f7fbeb0e-5dd7-4776-a9b6-5e219f6c6e4b\") " pod="kube-system/storage-provisioner"
	Dec 26 22:23:38 multinode-772557 kubelet[1385]: I1226 22:23:38.353529    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/931cdf23-56fe-45a4-afb5-7d30cf6c7d97-config-volume\") pod \"coredns-5dd5756b68-k29sm\" (UID: \"931cdf23-56fe-45a4-afb5-7d30cf6c7d97\") " pod="kube-system/coredns-5dd5756b68-k29sm"
	Dec 26 22:23:38 multinode-772557 kubelet[1385]: I1226 22:23:38.353553    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2jq4\" (UniqueName: \"kubernetes.io/projected/f7fbeb0e-5dd7-4776-a9b6-5e219f6c6e4b-kube-api-access-t2jq4\") pod \"storage-provisioner\" (UID: \"f7fbeb0e-5dd7-4776-a9b6-5e219f6c6e4b\") " pod="kube-system/storage-provisioner"
	Dec 26 22:23:39 multinode-772557 kubelet[1385]: I1226 22:23:39.699075    1385 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=31.69902082 podCreationTimestamp="2023-12-26 22:23:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-26 22:23:39.687060988 +0000 UTC m=+46.401804106" watchObservedRunningTime="2023-12-26 22:23:39.69902082 +0000 UTC m=+46.413763938"
	Dec 26 22:24:31 multinode-772557 kubelet[1385]: I1226 22:24:31.136817    1385 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-k29sm" podStartSLOduration=84.136777448 podCreationTimestamp="2023-12-26 22:23:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-26 22:23:39.699639653 +0000 UTC m=+46.414382804" watchObservedRunningTime="2023-12-26 22:24:31.136777448 +0000 UTC m=+97.851520566"
	Dec 26 22:24:31 multinode-772557 kubelet[1385]: I1226 22:24:31.136958    1385 topology_manager.go:215] "Topology Admit Handler" podUID="69a83cc9-8ea8-45d2-a403-bd84f0426741" podNamespace="default" podName="busybox-5bc68d56bd-ls5rz"
	Dec 26 22:24:31 multinode-772557 kubelet[1385]: W1226 22:24:31.148409    1385 reflector.go:535] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:multinode-772557" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'multinode-772557' and this object
	Dec 26 22:24:31 multinode-772557 kubelet[1385]: E1226 22:24:31.148481    1385 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:multinode-772557" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'multinode-772557' and this object
	Dec 26 22:24:31 multinode-772557 kubelet[1385]: I1226 22:24:31.281996    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9h2l\" (UniqueName: \"kubernetes.io/projected/69a83cc9-8ea8-45d2-a403-bd84f0426741-kube-api-access-h9h2l\") pod \"busybox-5bc68d56bd-ls5rz\" (UID: \"69a83cc9-8ea8-45d2-a403-bd84f0426741\") " pod="default/busybox-5bc68d56bd-ls5rz"
	Dec 26 22:24:32 multinode-772557 kubelet[1385]: E1226 22:24:32.392728    1385 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Dec 26 22:24:32 multinode-772557 kubelet[1385]: E1226 22:24:32.392783    1385 projected.go:198] Error preparing data for projected volume kube-api-access-h9h2l for pod default/busybox-5bc68d56bd-ls5rz: failed to sync configmap cache: timed out waiting for the condition
	Dec 26 22:24:32 multinode-772557 kubelet[1385]: E1226 22:24:32.392881    1385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/69a83cc9-8ea8-45d2-a403-bd84f0426741-kube-api-access-h9h2l podName:69a83cc9-8ea8-45d2-a403-bd84f0426741 nodeName:}" failed. No retries permitted until 2023-12-26 22:24:32.892852175 +0000 UTC m=+99.607595293 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-h9h2l" (UniqueName: "kubernetes.io/projected/69a83cc9-8ea8-45d2-a403-bd84f0426741-kube-api-access-h9h2l") pod "busybox-5bc68d56bd-ls5rz" (UID: "69a83cc9-8ea8-45d2-a403-bd84f0426741") : failed to sync configmap cache: timed out waiting for the condition
	Dec 26 22:24:36 multinode-772557 kubelet[1385]: E1226 22:24:36.927186    1385 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:52072->127.0.0.1:44297: write tcp 127.0.0.1:52072->127.0.0.1:44297: write: broken pipe
-- /stdout --
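Editor's note: the "kube-root-ca.crt is forbidden" and "failed to sync configmap cache" lines above are the node authorizer racing pod admission: the kubelet may not list/watch an object until a pod referencing it is scheduled to the node, and the projected-volume mount simply retries 500ms later (see the nestedpendingoperations line). A quick manual check, assuming the cluster from the log is still up (the profile and pod names are taken from the log; the commands themselves are generic kubectl usage, not part of the test harness):

	# Confirm the ConfigMap exists and that the pod eventually mounted it.
	kubectl --context multinode-772557 -n default get configmap kube-root-ca.crt
	kubectl --context multinode-772557 -n default describe pod busybox-5bc68d56bd-ls5rz | sed -n '/Events:/,$p'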
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p multinode-772557 -n multinode-772557
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-772557 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (4.30s)
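Editor's note: PingHostFrom2Pods exercises pod-to-host connectivity; the exact assertion lives in the Go test, but the failure can usually be reproduced by hand along these lines (a sketch under the assumption that the busybox deployment from the log is still running; deriving the host address from the default route is an illustration, not the test's own code):

	# Resolve a host-side address from inside the pod and ping it once.
	POD=busybox-5bc68d56bd-ls5rz
	HOST_IP=$(kubectl --context multinode-772557 exec "$POD" -- sh -c "ip route | awk '/^default/ {print \$3}'")
	kubectl --context multinode-772557 exec "$POD" -- ping -c 1 "$HOST_IP"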
TestRunningBinaryUpgrade (75.37s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.17.0.3791352332.exe start -p running-upgrade-415104 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.17.0.3791352332.exe start -p running-upgrade-415104 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m5.958733234s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-415104 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p running-upgrade-415104 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (4.269139835s)
-- stdout --
	* [running-upgrade-415104] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17857
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17857-697646/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17857-697646/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-415104 in cluster running-upgrade-415104
	* Pulling base image v0.0.42-1703498848-17857 ...
	* Updating the running docker "running-upgrade-415104" container ...
	
	

-- /stdout --
** stderr ** 
	I1226 22:40:42.755365  826117 out.go:296] Setting OutFile to fd 1 ...
	I1226 22:40:42.755553  826117 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 22:40:42.755566  826117 out.go:309] Setting ErrFile to fd 2...
	I1226 22:40:42.755574  826117 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 22:40:42.755851  826117 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17857-697646/.minikube/bin
	I1226 22:40:42.756327  826117 out.go:303] Setting JSON to false
	I1226 22:40:42.757404  826117 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":22977,"bootTime":1703607466,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1226 22:40:42.757482  826117 start.go:138] virtualization:  
	I1226 22:40:42.759913  826117 out.go:177] * [running-upgrade-415104] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1226 22:40:42.763314  826117 notify.go:220] Checking for updates...
	I1226 22:40:42.763395  826117 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17857-697646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4
	I1226 22:40:42.765990  826117 out.go:177]   - MINIKUBE_LOCATION=17857
	I1226 22:40:42.768160  826117 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1226 22:40:42.770472  826117 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17857-697646/kubeconfig
	I1226 22:40:42.773151  826117 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17857-697646/.minikube
	I1226 22:40:42.774845  826117 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1226 22:40:42.776615  826117 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1226 22:40:42.779374  826117 config.go:182] Loaded profile config "running-upgrade-415104": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1226 22:40:42.781680  826117 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1226 22:40:42.783645  826117 driver.go:392] Setting default libvirt URI to qemu:///system
	I1226 22:40:42.821965  826117 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1226 22:40:42.822074  826117 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 22:40:42.990757  826117 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:54 SystemTime:2023-12-26 22:40:42.97711947 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1226 22:40:42.990871  826117 docker.go:295] overlay module found
	I1226 22:40:42.993564  826117 out.go:177] * Using the docker driver based on existing profile
	I1226 22:40:42.996851  826117 start.go:298] selected driver: docker
	I1226 22:40:42.996879  826117 start.go:902] validating driver "docker" against &{Name:running-upgrade-415104 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-415104 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.129 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1226 22:40:42.996980  826117 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1226 22:40:42.997633  826117 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 22:40:43.191095  826117 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17857-697646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4.checksum
	I1226 22:40:43.198251  826117 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:54 SystemTime:2023-12-26 22:40:43.181698745 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1226 22:40:43.198565  826117 cni.go:84] Creating CNI manager for ""
	I1226 22:40:43.198578  826117 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1226 22:40:43.198590  826117 start_flags.go:323] config:
	{Name:running-upgrade-415104 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-415104 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.129 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1226 22:40:43.201258  826117 out.go:177] * Starting control plane node running-upgrade-415104 in cluster running-upgrade-415104
	I1226 22:40:43.203848  826117 cache.go:121] Beginning downloading kic base image for docker with crio
	I1226 22:40:43.205967  826117 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I1226 22:40:43.207959  826117 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I1226 22:40:43.208130  826117 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I1226 22:40:43.231283  826117 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I1226 22:40:43.231305  826117 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W1226 22:40:43.277020  826117 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1226 22:40:43.277153  826117 profile.go:148] Saving config to /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/running-upgrade-415104/config.json ...
	I1226 22:40:43.277395  826117 cache.go:194] Successfully downloaded all kic artifacts
	I1226 22:40:43.277430  826117 start.go:365] acquiring machines lock for running-upgrade-415104: {Name:mk741a66a7abadd3b86d70f674f47a0ef2099408 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 22:40:43.277483  826117 start.go:369] acquired machines lock for "running-upgrade-415104" in 31.047µs
	I1226 22:40:43.277497  826117 start.go:96] Skipping create...Using existing machine configuration
	I1226 22:40:43.277503  826117 fix.go:54] fixHost starting: 
	I1226 22:40:43.277761  826117 cli_runner.go:164] Run: docker container inspect running-upgrade-415104 --format={{.State.Status}}
	I1226 22:40:43.278038  826117 cache.go:107] acquiring lock: {Name:mkb0c415fb66519dc4d25de2f2e85a3d2941a136 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 22:40:43.278102  826117 cache.go:115] /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1226 22:40:43.278110  826117 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 76.018µs
	I1226 22:40:43.278118  826117 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1226 22:40:43.278128  826117 cache.go:107] acquiring lock: {Name:mka92e10ddf463094441c8aa2f9b2d35bcc9ef4b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 22:40:43.278157  826117 cache.go:115] /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I1226 22:40:43.278162  826117 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 35.027µs
	I1226 22:40:43.278168  826117 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I1226 22:40:43.278177  826117 cache.go:107] acquiring lock: {Name:mk7865403d3c41f6f816dbb1cbc2462d431f3f1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 22:40:43.278203  826117 cache.go:115] /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I1226 22:40:43.278210  826117 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 31.737µs
	I1226 22:40:43.278218  826117 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I1226 22:40:43.278230  826117 cache.go:107] acquiring lock: {Name:mk3b7a098e6afe4fbdba7020b7ff9912a911d97a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 22:40:43.278256  826117 cache.go:115] /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I1226 22:40:43.278260  826117 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 35.896µs
	I1226 22:40:43.278266  826117 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I1226 22:40:43.278274  826117 cache.go:107] acquiring lock: {Name:mk0889c5c52f22b229acaa40a5f123bee3f71fb2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 22:40:43.278297  826117 cache.go:115] /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I1226 22:40:43.278302  826117 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 28.635µs
	I1226 22:40:43.278308  826117 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I1226 22:40:43.278315  826117 cache.go:107] acquiring lock: {Name:mk6dd42b4f5730263f9316f596eb75b275e6a548 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 22:40:43.278340  826117 cache.go:115] /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I1226 22:40:43.278344  826117 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 30.047µs
	I1226 22:40:43.278350  826117 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I1226 22:40:43.278358  826117 cache.go:107] acquiring lock: {Name:mk1bd0e4578a8b4054cf01e8271f46e2f31f33ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 22:40:43.278388  826117 cache.go:115] /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I1226 22:40:43.278392  826117 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 35.478µs
	I1226 22:40:43.278398  826117 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I1226 22:40:43.278411  826117 cache.go:107] acquiring lock: {Name:mke4ec52c0c8d7fadbdda753d37368ed659c96e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 22:40:43.278437  826117 cache.go:115] /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I1226 22:40:43.278441  826117 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 36.643µs
	I1226 22:40:43.278447  826117 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I1226 22:40:43.278452  826117 cache.go:87] Successfully saved all images to host disk.
	I1226 22:40:43.309301  826117 fix.go:102] recreateIfNeeded on running-upgrade-415104: state=Running err=<nil>
	W1226 22:40:43.309334  826117 fix.go:128] unexpected machine state, will restart: <nil>
	I1226 22:40:43.312021  826117 out.go:177] * Updating the running docker "running-upgrade-415104" container ...
	I1226 22:40:43.317011  826117 machine.go:88] provisioning docker machine ...
	I1226 22:40:43.317061  826117 ubuntu.go:169] provisioning hostname "running-upgrade-415104"
	I1226 22:40:43.317145  826117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-415104
	I1226 22:40:43.343038  826117 main.go:141] libmachine: Using SSH client type: native
	I1226 22:40:43.343504  826117 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33857 <nil> <nil>}
	I1226 22:40:43.343517  826117 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-415104 && echo "running-upgrade-415104" | sudo tee /etc/hostname
	I1226 22:40:43.567269  826117 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-415104
	
	I1226 22:40:43.567343  826117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-415104
	I1226 22:40:43.595119  826117 main.go:141] libmachine: Using SSH client type: native
	I1226 22:40:43.595537  826117 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33857 <nil> <nil>}
	I1226 22:40:43.595555  826117 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-415104' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-415104/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-415104' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1226 22:40:43.778533  826117 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1226 22:40:43.778570  826117 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17857-697646/.minikube CaCertPath:/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17857-697646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17857-697646/.minikube}
	I1226 22:40:43.778597  826117 ubuntu.go:177] setting up certificates
	I1226 22:40:43.778610  826117 provision.go:83] configureAuth start
	I1226 22:40:43.778678  826117 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-415104
	I1226 22:40:43.810511  826117 provision.go:138] copyHostCerts
	I1226 22:40:43.810597  826117 exec_runner.go:144] found /home/jenkins/minikube-integration/17857-697646/.minikube/ca.pem, removing ...
	I1226 22:40:43.810618  826117 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17857-697646/.minikube/ca.pem
	I1226 22:40:43.810693  826117 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17857-697646/.minikube/ca.pem (1082 bytes)
	I1226 22:40:43.810808  826117 exec_runner.go:144] found /home/jenkins/minikube-integration/17857-697646/.minikube/cert.pem, removing ...
	I1226 22:40:43.810816  826117 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17857-697646/.minikube/cert.pem
	I1226 22:40:43.810848  826117 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17857-697646/.minikube/cert.pem (1123 bytes)
	I1226 22:40:43.811039  826117 exec_runner.go:144] found /home/jenkins/minikube-integration/17857-697646/.minikube/key.pem, removing ...
	I1226 22:40:43.811045  826117 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17857-697646/.minikube/key.pem
	I1226 22:40:43.811077  826117 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17857-697646/.minikube/key.pem (1679 bytes)
	I1226 22:40:43.811133  826117 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17857-697646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-415104 san=[192.168.70.129 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-415104]
	I1226 22:40:44.422479  826117 provision.go:172] copyRemoteCerts
	I1226 22:40:44.422601  826117 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1226 22:40:44.422664  826117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-415104
	I1226 22:40:44.447464  826117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33857 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/running-upgrade-415104/id_rsa Username:docker}
	I1226 22:40:44.547374  826117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1226 22:40:44.572409  826117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1226 22:40:44.601091  826117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1226 22:40:44.627270  826117 provision.go:86] duration metric: configureAuth took 848.625832ms
	I1226 22:40:44.627306  826117 ubuntu.go:193] setting minikube options for container-runtime
	I1226 22:40:44.627483  826117 config.go:182] Loaded profile config "running-upgrade-415104": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1226 22:40:44.627583  826117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-415104
	I1226 22:40:44.654735  826117 main.go:141] libmachine: Using SSH client type: native
	I1226 22:40:44.655178  826117 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33857 <nil> <nil>}
	I1226 22:40:44.655200  826117 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1226 22:40:45.385291  826117 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1226 22:40:45.385319  826117 machine.go:91] provisioned docker machine in 2.06826793s
	I1226 22:40:45.385332  826117 start.go:300] post-start starting for "running-upgrade-415104" (driver="docker")
	I1226 22:40:45.385343  826117 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1226 22:40:45.385406  826117 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1226 22:40:45.385455  826117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-415104
	I1226 22:40:45.408831  826117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33857 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/running-upgrade-415104/id_rsa Username:docker}
	I1226 22:40:45.513634  826117 ssh_runner.go:195] Run: cat /etc/os-release
	I1226 22:40:45.517726  826117 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1226 22:40:45.517750  826117 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1226 22:40:45.517763  826117 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1226 22:40:45.517769  826117 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1226 22:40:45.517780  826117 filesync.go:126] Scanning /home/jenkins/minikube-integration/17857-697646/.minikube/addons for local assets ...
	I1226 22:40:45.517841  826117 filesync.go:126] Scanning /home/jenkins/minikube-integration/17857-697646/.minikube/files for local assets ...
	I1226 22:40:45.517929  826117 filesync.go:149] local asset: /home/jenkins/minikube-integration/17857-697646/.minikube/files/etc/ssl/certs/7030362.pem -> 7030362.pem in /etc/ssl/certs
	I1226 22:40:45.518043  826117 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1226 22:40:45.527221  826117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/files/etc/ssl/certs/7030362.pem --> /etc/ssl/certs/7030362.pem (1708 bytes)
	I1226 22:40:45.561107  826117 start.go:303] post-start completed in 175.759808ms
	I1226 22:40:45.561189  826117 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1226 22:40:45.561239  826117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-415104
	I1226 22:40:45.580574  826117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33857 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/running-upgrade-415104/id_rsa Username:docker}
	I1226 22:40:45.680362  826117 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1226 22:40:45.687006  826117 fix.go:56] fixHost completed within 2.40949661s
	I1226 22:40:45.687030  826117 start.go:83] releasing machines lock for "running-upgrade-415104", held for 2.409538274s
	I1226 22:40:45.687097  826117 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-415104
	I1226 22:40:45.706872  826117 ssh_runner.go:195] Run: cat /version.json
	I1226 22:40:45.706933  826117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-415104
	I1226 22:40:45.707241  826117 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1226 22:40:45.707296  826117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-415104
	I1226 22:40:45.738736  826117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33857 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/running-upgrade-415104/id_rsa Username:docker}
	I1226 22:40:45.739933  826117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33857 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/running-upgrade-415104/id_rsa Username:docker}
	W1226 22:40:45.981709  826117 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1226 22:40:45.981891  826117 ssh_runner.go:195] Run: systemctl --version
	I1226 22:40:45.988419  826117 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1226 22:40:46.104563  826117 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1226 22:40:46.110596  826117 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1226 22:40:46.141797  826117 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1226 22:40:46.141881  826117 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1226 22:40:46.180358  826117 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1226 22:40:46.180385  826117 start.go:475] detecting cgroup driver to use...
	I1226 22:40:46.180418  826117 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1226 22:40:46.180483  826117 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1226 22:40:46.213451  826117 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1226 22:40:46.226321  826117 docker.go:203] disabling cri-docker service (if available) ...
	I1226 22:40:46.226393  826117 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1226 22:40:46.240687  826117 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1226 22:40:46.253277  826117 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1226 22:40:46.267924  826117 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1226 22:40:46.268033  826117 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1226 22:40:46.430541  826117 docker.go:219] disabling docker service ...
	I1226 22:40:46.430634  826117 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1226 22:40:46.447230  826117 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1226 22:40:46.459825  826117 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1226 22:40:46.691109  826117 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1226 22:40:46.890365  826117 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1226 22:40:46.905213  826117 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1226 22:40:46.925927  826117 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1226 22:40:46.926019  826117 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1226 22:40:46.939385  826117 out.go:177] 
	W1226 22:40:46.941373  826117 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1226 22:40:46.941433  826117 out.go:239] * 
	W1226 22:40:46.942515  826117 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1226 22:40:46.944622  826117 out.go:177] 
** /stderr **
version_upgrade_test.go:145: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p running-upgrade-415104 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
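Editor's note: the root cause is visible in the stderr above. The profile was created by minikube v1.17.0 on kicbase v0.0.17, where /etc/crio/crio.conf.d/02-crio.conf does not exist (hence "sed: can't read"); the new binary assumes the drop-in config layout, the sed exits 2, and start surfaces that as RUNTIME_ENABLE / exit status 90. Older CRI-O installs keeping everything in a single /etc/crio/crio.conf is an assumption about this image, not something the log shows. The failing command, plus a layout-tolerant sketch (an illustration of the mismatch, not minikube's actual fix):

	# Exactly what the log shows failing inside the container:
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
	# Defensive variant: patch the drop-in if present, else the monolithic file.
	for f in /etc/crio/crio.conf.d/02-crio.conf /etc/crio/crio.conf; do
	    [ -f "$f" ] && sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$f" && break
	done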
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-12-26 22:40:46.988026873 +0000 UTC m=+3367.641001736
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-415104
helpers_test.go:235: (dbg) docker inspect running-upgrade-415104:
-- stdout --
	[
	    {
	        "Id": "b0e12c69662fa4ac6da37a0629796737605d087a9c95c19cad38e591e5518038",
	        "Created": "2023-12-26T22:39:54.815063871Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 822550,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-26T22:39:55.244164609Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/b0e12c69662fa4ac6da37a0629796737605d087a9c95c19cad38e591e5518038/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b0e12c69662fa4ac6da37a0629796737605d087a9c95c19cad38e591e5518038/hostname",
	        "HostsPath": "/var/lib/docker/containers/b0e12c69662fa4ac6da37a0629796737605d087a9c95c19cad38e591e5518038/hosts",
	        "LogPath": "/var/lib/docker/containers/b0e12c69662fa4ac6da37a0629796737605d087a9c95c19cad38e591e5518038/b0e12c69662fa4ac6da37a0629796737605d087a9c95c19cad38e591e5518038-json.log",
	        "Name": "/running-upgrade-415104",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-415104:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "running-upgrade-415104",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/105157eeecf71f6ecc02140da08069828f01ef7495e8772eddf9aabdc4e4cca2-init/diff:/var/lib/docker/overlay2/aff80e67ef37084c95001ba1645fe003d59f75e0ac4ce6a97eaf5e98ca65986b/diff:/var/lib/docker/overlay2/6ddce3370d95189066a22476d88c1541cb0053e2f84f582ff4704673902295c9/diff:/var/lib/docker/overlay2/d1f4d59aa7e872cbc3462c8583489cf532d1b53f20bf792444a233e424728358/diff:/var/lib/docker/overlay2/924e786130c3c08539b8c35c04e648a4d4490cda5394ad9c8dcc9fb8688d0576/diff:/var/lib/docker/overlay2/de5355dbb6d1afebebf1c3338bb059ec8c542ab19dc64da6688e47bafaf61de6/diff:/var/lib/docker/overlay2/2f761bb26857a7789d517afd4f68a91b2bcd5dfd2fb4dc5d325f05266ae224cd/diff:/var/lib/docker/overlay2/f9635816969bd536ced80b00d0ccbac1d57f30a4750c97d1d59e18c872584ec6/diff:/var/lib/docker/overlay2/0ad6022370e0a5c7b53b6d516417c3c0016557e4c6f3eb5ca88c78e5dca89e23/diff:/var/lib/docker/overlay2/c1af72edaeb4fb107cd7fc958d16b5038a615b1a64a3c5d67cedea59f7f62fc5/diff:/var/lib/docker/overlay2/0c881930478d789e0d20557da430b479a8cb8b2859dd7f568ecce4d99859ab27/diff:/var/lib/docker/overlay2/6076fe703e0d6b736ab2dbb1999793b641e40a7a22ae9ebe6f620e16d939d652/diff:/var/lib/docker/overlay2/add2acad624e2e6e55beace00c94b0828eaa2e490f657909bcec77a7f5fdc97f/diff:/var/lib/docker/overlay2/214f39a4d729ec5a77bd514bb457f4b0846f2837488cd8a3273168b280833952/diff:/var/lib/docker/overlay2/11f3ac138f7599824b3e533432e0daaee1a307fba67aa95f20dabbf60eb2a3f0/diff:/var/lib/docker/overlay2/9edd13f066493f4d25ee0afa6ab887b0ff8fcac6b8c19279096bb5b12c9b2d2c/diff:/var/lib/docker/overlay2/2a42b0990c72e84b8c59d44672039bb2b1c023183b070ff8866c5051f3211903/diff:/var/lib/docker/overlay2/d5e40271e48a7a01b8d53cc0e8aa22b6c1433b04a8939a62904ffa4c9901a1b8/diff:/var/lib/docker/overlay2/e4c93f5299e84425537516dd398ce898ef0ad2f41f9ec69a5ef2237e1b17c7d5/diff:/var/lib/docker/overlay2/44efd82990bd64c9dd316bd1b3281e37124a5a8e7a86465b851a4f0dffd3b731/diff:/var/lib/docker/overlay2/11ce1b3a1f58593ab02eaf7e259a4e197c6233e42c5521a471d61808c0ea6875/diff:/var/lib/docker/overlay2/f96972bda05370b5933e8c1a6a2055f27c0c2b2683f3eed5b9f314478b523a9d/diff:/var/lib/docker/overlay2/d04a2f6bae9f32da9e60d3461e185eff86e42a99112f64632157a39046774dd7/diff:/var/lib/docker/overlay2/0da1bfdb1d6e6c5df59981fad0b86d2717ee4880a398794dbe83a6098739dbf0/diff:/var/lib/docker/overlay2/8e5d7dd942e4af05722a4e840d48ca91f9f23db271d8aa6627999d29412142a9/diff:/var/lib/docker/overlay2/34ea282cd38cee4123ba00375f826fad5408f882029906977e60135dcca06f81/diff:/var/lib/docker/overlay2/a7b2d6d927c131d009c54c5c70057dbfceef4620a0d6f618b6887e852d9c2e63/diff:/var/lib/docker/overlay2/79f1494ca8635517457b1f9669bfe6b18daa80238494b0b36f00d9735081a331/diff:/var/lib/docker/overlay2/cba58c07ebbd990eedd7dd338d42d011f94086a86eeb2638c2c480ba577efaae/diff:/var/lib/docker/overlay2/63aa4aac79801b4f220cad13b579dcd594bbedb0bb3f2076eef62a17deb653d4/diff:/var/lib/docker/overlay2/c43be29e78d2f745704d824fe4ca4790f46e4c44c6759538c6a70d0be3637166/diff:/var/lib/docker/overlay2/67eff87ade5ffdbdbff902912bc71632c082aac573ee6d60fc3522d5693ef129/diff:/var/lib/docker/overlay2/4087b4d9b7a4653df5e0a6b6b890c96e1e1add3e42e2bdb9b51f8caa8579aa7d/diff:/var/lib/docker/overlay2/ef94ff6fc8983bbc46d063de982253af5c9f6382e2867b04b451ab193d2e7ddf/diff:/var/lib/docker/overlay2/ebd2b40ad654fb1dd4dbb19c4e52f582aa95361bdcf8431874cfb1b98bef6005/diff",
	                "MergedDir": "/var/lib/docker/overlay2/105157eeecf71f6ecc02140da08069828f01ef7495e8772eddf9aabdc4e4cca2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/105157eeecf71f6ecc02140da08069828f01ef7495e8772eddf9aabdc4e4cca2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/105157eeecf71f6ecc02140da08069828f01ef7495e8772eddf9aabdc4e4cca2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-415104",
	                "Source": "/var/lib/docker/volumes/running-upgrade-415104/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-415104",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-415104",
	                "name.minikube.sigs.k8s.io": "running-upgrade-415104",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "32f5382d8eb6247430e884491b1c081d610b3f11883daef9d736fa42954c8cbb",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33857"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33856"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33855"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33854"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/32f5382d8eb6",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "running-upgrade-415104": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.70.129"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b0e12c69662f",
	                        "running-upgrade-415104"
	                    ],
	                    "NetworkID": "e72b42f496a2c68d5d95429e03c7994dc0052ccebfca2be6923be7b2f5e0b9f7",
	                    "EndpointID": "8f32210316d389a151a72f6e9a11192b81868819e12bc818f5b2c4dc957c48d2",
	                    "Gateway": "192.168.70.1",
	                    "IPAddress": "192.168.70.129",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:46:81",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
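The Ports map in the inspect output above is how the harness finds where each guest port (22, 2376, 5000, 8443) is published on the host; later log lines issue exactly this query via a Go template (docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'). A minimal Go sketch of that lookup, shelling out to the docker CLI, assuming docker is on PATH (illustrative only, not the harness code; the container name is taken from this report):

	// lookup_port.go - a minimal sketch, not minikube's implementation.
	// It resolves the host port Docker published for a guest port, using
	// the same Go template the cli_runner log lines below show.
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func hostPort(container, guestPort string) (string, error) {
		tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, guestPort)
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", fmt.Errorf("inspect %s: %w", container, err)
		}
		return strings.TrimSpace(string(out)), nil
	}
	
	func main() {
		// Name and port come from the inspect output above; on the CI host
		// 22/tcp resolved to 127.0.0.1:33857.
		p, err := hostPort("running-upgrade-415104", "22/tcp")
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		fmt.Println("22/tcp ->", p)
	}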
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-415104 -n running-upgrade-415104
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-415104 -n running-upgrade-415104: exit status 4 (527.912001ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1226 22:40:47.470382  826851 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-415104" does not appear in /home/jenkins/minikube-integration/17857-697646/kubeconfig

                                                
                                                
** /stderr **
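The exit status 4 above is produced when status.go cannot find the profile's cluster entry in the kubeconfig, so there is no endpoint IP to extract. A minimal sketch of that membership check, assuming k8s.io/client-go is available as a dependency (an illustration of the failing condition, not minikube's status.go):

	// kubeconfig_check.go - illustrative only.
	package main
	
	import (
		"fmt"
		"os"
	
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Path and profile name come from the stderr block above.
		path := "/home/jenkins/minikube-integration/17857-697646/kubeconfig"
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
			os.Exit(1)
		}
		const profile = "running-upgrade-415104"
		if _, ok := cfg.Clusters[profile]; !ok {
			// The condition behind: `"running-upgrade-415104" does not
			// appear in .../kubeconfig` in the error above.
			fmt.Printf("%q does not appear in %s\n", profile, path)
		}
	}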
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-415104" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-415104" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-415104
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-415104: (3.354809773s)
--- FAIL: TestRunningBinaryUpgrade (75.37s)

                                                
                                    
TestMissingContainerUpgrade (182.13s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.17.0.1891176363.exe start -p missing-upgrade-099766 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.17.0.1891176363.exe start -p missing-upgrade-099766 --memory=2200 --driver=docker  --container-runtime=crio: (2m9.996711393s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-099766
version_upgrade_test.go:331: (dbg) Done: docker stop missing-upgrade-099766: (10.352573322s)
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-099766
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-099766 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:342: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p missing-upgrade-099766 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (38.392440423s)

                                                
                                                
-- stdout --
	* [missing-upgrade-099766] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17857
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17857-697646/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17857-697646/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node missing-upgrade-099766 in cluster missing-upgrade-099766
	* Pulling base image v0.0.42-1703498848-17857 ...
	* docker "missing-upgrade-099766" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
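The stderr trace that follows shows how the missing container is detected and handled: minikube probes the container with docker container inspect --format={{.State.Status}}, treats exit status 1 ("No such container") as a missing machine, demolishes the leftover network and volume, and recreates the container. A compressed Go sketch of just the probe step, assuming docker is on PATH (illustrative; the real logic lives in fix.go and oci.go per the log):

	// probe_state.go - a sketch of the missing-container probe that the
	// stderr log below repeats; not minikube's fix.go.
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// containerState returns the container status, or ok=false when the
	// container does not exist (docker inspect exits non-zero and prints
	// "No such container" on stderr).
	func containerState(name string) (state string, ok bool) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		if err != nil {
			return "", false // e.g. "No such container: missing-upgrade-099766"
		}
		return strings.TrimSpace(string(out)), true
	}
	
	func main() {
		if _, ok := containerState("missing-upgrade-099766"); !ok {
			// Matches the log line: `docker "missing-upgrade-099766"
			// container is missing, will recreate.`
			fmt.Println("container is missing, will recreate")
		}
	}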
** stderr ** 
	I1226 22:37:17.193649  812961 out.go:296] Setting OutFile to fd 1 ...
	I1226 22:37:17.193848  812961 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 22:37:17.193861  812961 out.go:309] Setting ErrFile to fd 2...
	I1226 22:37:17.193868  812961 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 22:37:17.194120  812961 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17857-697646/.minikube/bin
	I1226 22:37:17.194531  812961 out.go:303] Setting JSON to false
	I1226 22:37:17.195651  812961 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":22771,"bootTime":1703607466,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1226 22:37:17.195865  812961 start.go:138] virtualization:  
	I1226 22:37:17.199927  812961 out.go:177] * [missing-upgrade-099766] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1226 22:37:17.202674  812961 out.go:177]   - MINIKUBE_LOCATION=17857
	I1226 22:37:17.204607  812961 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1226 22:37:17.202744  812961 notify.go:220] Checking for updates...
	I1226 22:37:17.206899  812961 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17857-697646/kubeconfig
	I1226 22:37:17.208948  812961 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17857-697646/.minikube
	I1226 22:37:17.211025  812961 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1226 22:37:17.212787  812961 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1226 22:37:17.214765  812961 config.go:182] Loaded profile config "missing-upgrade-099766": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1226 22:37:17.217218  812961 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1226 22:37:17.219170  812961 driver.go:392] Setting default libvirt URI to qemu:///system
	I1226 22:37:17.243435  812961 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1226 22:37:17.243551  812961 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 22:37:17.325118  812961 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2023-12-26 22:37:17.314635067 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1226 22:37:17.325222  812961 docker.go:295] overlay module found
	I1226 22:37:17.327849  812961 out.go:177] * Using the docker driver based on existing profile
	I1226 22:37:17.332458  812961 start.go:298] selected driver: docker
	I1226 22:37:17.332496  812961 start.go:902] validating driver "docker" against &{Name:missing-upgrade-099766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:missing-upgrade-099766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.183 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1226 22:37:17.332625  812961 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1226 22:37:17.333250  812961 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 22:37:17.402734  812961 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2023-12-26 22:37:17.392550647 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1226 22:37:17.403088  812961 cni.go:84] Creating CNI manager for ""
	I1226 22:37:17.403118  812961 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1226 22:37:17.403136  812961 start_flags.go:323] config:
	{Name:missing-upgrade-099766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:missing-upgrade-099766 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.183 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1226 22:37:17.405782  812961 out.go:177] * Starting control plane node missing-upgrade-099766 in cluster missing-upgrade-099766
	I1226 22:37:17.407767  812961 cache.go:121] Beginning downloading kic base image for docker with crio
	I1226 22:37:17.409746  812961 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I1226 22:37:17.411531  812961 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I1226 22:37:17.411625  812961 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I1226 22:37:17.429890  812961 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	I1226 22:37:17.430123  812961 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local cache directory
	I1226 22:37:17.430824  812961 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	W1226 22:37:17.492887  812961 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1226 22:37:17.493044  812961 profile.go:148] Saving config to /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/missing-upgrade-099766/config.json ...
	I1226 22:37:17.493209  812961 cache.go:107] acquiring lock: {Name:mkb0c415fb66519dc4d25de2f2e85a3d2941a136 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 22:37:17.493342  812961 cache.go:115] /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1226 22:37:17.493371  812961 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 178.613µs
	I1226 22:37:17.493397  812961 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1226 22:37:17.493416  812961 cache.go:107] acquiring lock: {Name:mk0889c5c52f22b229acaa40a5f123bee3f71fb2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 22:37:17.493436  812961 cache.go:107] acquiring lock: {Name:mk6dd42b4f5730263f9316f596eb75b275e6a548 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 22:37:17.493631  812961 cache.go:107] acquiring lock: {Name:mka92e10ddf463094441c8aa2f9b2d35bcc9ef4b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 22:37:17.493680  812961 cache.go:107] acquiring lock: {Name:mk1bd0e4578a8b4054cf01e8271f46e2f31f33ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 22:37:17.493884  812961 cache.go:107] acquiring lock: {Name:mk7865403d3c41f6f816dbb1cbc2462d431f3f1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 22:37:17.494073  812961 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.2
	I1226 22:37:17.493973  812961 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.2
	I1226 22:37:17.493997  812961 cache.go:107] acquiring lock: {Name:mk3b7a098e6afe4fbdba7020b7ff9912a911d97a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 22:37:17.494308  812961 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1226 22:37:17.494343  812961 cache.go:107] acquiring lock: {Name:mke4ec52c0c8d7fadbdda753d37368ed659c96e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 22:37:17.495047  812961 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.2
	I1226 22:37:17.495407  812961 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.2
	I1226 22:37:17.495984  812961 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.2
	I1226 22:37:17.496281  812961 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.2
	I1226 22:37:17.496600  812961 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I1226 22:37:17.496874  812961 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.2
	I1226 22:37:17.497522  812961 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1226 22:37:17.497820  812961 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1226 22:37:17.498116  812961 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.2
	I1226 22:37:17.498630  812961 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1226 22:37:17.499214  812961 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	W1226 22:37:17.845317  812961 image.go:265] image registry.k8s.io/kube-proxy:v1.20.2 arch mismatch: want arm64 got amd64. fixing
	I1226 22:37:17.845449  812961 cache.go:162] opening:  /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2
	I1226 22:37:17.848138  812961 cache.go:162] opening:  /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2
	W1226 22:37:17.853430  812961 image.go:265] image registry.k8s.io/etcd:3.4.13-0 arch mismatch: want arm64 got amd64. fixing
	I1226 22:37:17.853511  812961 cache.go:162] opening:  /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0
	I1226 22:37:17.860372  812961 cache.go:162] opening:  /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I1226 22:37:17.868722  812961 cache.go:162] opening:  /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2
	W1226 22:37:17.876215  812961 image.go:265] image registry.k8s.io/coredns:1.7.0 arch mismatch: want arm64 got amd64. fixing
	I1226 22:37:17.876298  812961 cache.go:162] opening:  /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0
	I1226 22:37:17.885998  812961 cache.go:162] opening:  /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2
	I1226 22:37:17.952776  812961 cache.go:157] /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I1226 22:37:17.952800  812961 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 459.364782ms
	I1226 22:37:17.952811  812961 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  4.33 MiB / 287.99 MiB [>_] 1.50% ? p/s ?
	I1226 22:37:18.269540  812961 cache.go:157] /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I1226 22:37:18.269575  812961 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 775.5793ms
	I1226 22:37:18.269590  812961 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I1226 22:37:18.362246  812961 cache.go:157] /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I1226 22:37:18.364697  812961 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 870.341249ms
	I1226 22:37:18.364764  812961 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  22.30 MiB / 287.99 MiB [>] 7.74% ? p/s ?
	I1226 22:37:18.588652  812961 cache.go:157] /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I1226 22:37:18.588678  812961 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 1.094822309s
	I1226 22:37:18.588691  812961 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I1226 22:37:18.589772  812961 cache.go:157] /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I1226 22:37:18.589798  812961 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 1.096170602s
	I1226 22:37:18.589810  812961 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  39.72 MiB / 287.99 MiB  13.79% 41.85 MiB
	I1226 22:37:19.255161  812961 cache.go:157] /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I1226 22:37:19.255231  812961 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 1.761818779s
	I1226 22:37:19.255333  812961 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  154.82 MiB / 287.99 MiB  53.76% 44.78 MiB
	I1226 22:37:20.955856  812961 cache.go:157] /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I1226 22:37:20.955883  812961 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 3.46222979s
	I1226 22:37:20.955897  812961 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I1226 22:37:20.955909  812961 cache.go:87] Successfully saved all images to host disk.
	    > gcr.io/k8s-minikube/kicbase...:  287.99 MiB / 287.99 MiB  100.00% 45.56 MiB
	I1226 22:37:24.370672  812961 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e as a tarball
	I1226 22:37:24.370708  812961 cache.go:162] Loading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from local cache
	I1226 22:37:25.273218  812961 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from cached tarball
	I1226 22:37:25.273265  812961 cache.go:194] Successfully downloaded all kic artifacts
	I1226 22:37:25.273328  812961 start.go:365] acquiring machines lock for missing-upgrade-099766: {Name:mk404dcdd52cdd4b037a09ce2b54e5c65da158c2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 22:37:25.273408  812961 start.go:369] acquired machines lock for "missing-upgrade-099766" in 56.549µs
	I1226 22:37:25.273432  812961 start.go:96] Skipping create...Using existing machine configuration
	I1226 22:37:25.273439  812961 fix.go:54] fixHost starting: 
	I1226 22:37:25.273716  812961 cli_runner.go:164] Run: docker container inspect missing-upgrade-099766 --format={{.State.Status}}
	W1226 22:37:25.290363  812961 cli_runner.go:211] docker container inspect missing-upgrade-099766 --format={{.State.Status}} returned with exit code 1
	I1226 22:37:25.290445  812961 fix.go:102] recreateIfNeeded on missing-upgrade-099766: state= err=unknown state "missing-upgrade-099766": docker container inspect missing-upgrade-099766 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-099766
	I1226 22:37:25.290468  812961 fix.go:107] machineExists: false. err=machine does not exist
	I1226 22:37:25.292812  812961 out.go:177] * docker "missing-upgrade-099766" container is missing, will recreate.
	I1226 22:37:25.294629  812961 delete.go:124] DEMOLISHING missing-upgrade-099766 ...
	I1226 22:37:25.294723  812961 cli_runner.go:164] Run: docker container inspect missing-upgrade-099766 --format={{.State.Status}}
	W1226 22:37:25.311918  812961 cli_runner.go:211] docker container inspect missing-upgrade-099766 --format={{.State.Status}} returned with exit code 1
	W1226 22:37:25.311998  812961 stop.go:75] unable to get state: unknown state "missing-upgrade-099766": docker container inspect missing-upgrade-099766 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-099766
	I1226 22:37:25.312016  812961 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-099766": docker container inspect missing-upgrade-099766 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-099766
	I1226 22:37:25.312484  812961 cli_runner.go:164] Run: docker container inspect missing-upgrade-099766 --format={{.State.Status}}
	W1226 22:37:25.332996  812961 cli_runner.go:211] docker container inspect missing-upgrade-099766 --format={{.State.Status}} returned with exit code 1
	I1226 22:37:25.333061  812961 delete.go:82] Unable to get host status for missing-upgrade-099766, assuming it has already been deleted: state: unknown state "missing-upgrade-099766": docker container inspect missing-upgrade-099766 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-099766
	I1226 22:37:25.333144  812961 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-099766
	W1226 22:37:25.350232  812961 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-099766 returned with exit code 1
	I1226 22:37:25.350267  812961 kic.go:371] could not find the container missing-upgrade-099766 to remove it. will try anyways
	I1226 22:37:25.350349  812961 cli_runner.go:164] Run: docker container inspect missing-upgrade-099766 --format={{.State.Status}}
	W1226 22:37:25.370468  812961 cli_runner.go:211] docker container inspect missing-upgrade-099766 --format={{.State.Status}} returned with exit code 1
	W1226 22:37:25.370525  812961 oci.go:84] error getting container status, will try to delete anyways: unknown state "missing-upgrade-099766": docker container inspect missing-upgrade-099766 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-099766
	I1226 22:37:25.370601  812961 cli_runner.go:164] Run: docker exec --privileged -t missing-upgrade-099766 /bin/bash -c "sudo init 0"
	W1226 22:37:25.388385  812961 cli_runner.go:211] docker exec --privileged -t missing-upgrade-099766 /bin/bash -c "sudo init 0" returned with exit code 1
	I1226 22:37:25.388419  812961 oci.go:650] error shutdown missing-upgrade-099766: docker exec --privileged -t missing-upgrade-099766 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-099766
	I1226 22:37:26.389256  812961 cli_runner.go:164] Run: docker container inspect missing-upgrade-099766 --format={{.State.Status}}
	W1226 22:37:26.410018  812961 cli_runner.go:211] docker container inspect missing-upgrade-099766 --format={{.State.Status}} returned with exit code 1
	I1226 22:37:26.410078  812961 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-099766": docker container inspect missing-upgrade-099766 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-099766
	I1226 22:37:26.410090  812961 oci.go:664] temporary error: container missing-upgrade-099766 status is  but expect it to be exited
	I1226 22:37:26.410119  812961 retry.go:31] will retry after 562.71032ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-099766": docker container inspect missing-upgrade-099766 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-099766
	I1226 22:37:26.973393  812961 cli_runner.go:164] Run: docker container inspect missing-upgrade-099766 --format={{.State.Status}}
	W1226 22:37:26.990267  812961 cli_runner.go:211] docker container inspect missing-upgrade-099766 --format={{.State.Status}} returned with exit code 1
	I1226 22:37:26.990326  812961 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-099766": docker container inspect missing-upgrade-099766 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-099766
	I1226 22:37:26.990348  812961 oci.go:664] temporary error: container missing-upgrade-099766 status is  but expect it to be exited
	I1226 22:37:26.990375  812961 retry.go:31] will retry after 416.319107ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-099766": docker container inspect missing-upgrade-099766 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-099766
	I1226 22:37:27.407021  812961 cli_runner.go:164] Run: docker container inspect missing-upgrade-099766 --format={{.State.Status}}
	W1226 22:37:27.429252  812961 cli_runner.go:211] docker container inspect missing-upgrade-099766 --format={{.State.Status}} returned with exit code 1
	I1226 22:37:27.429313  812961 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-099766": docker container inspect missing-upgrade-099766 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-099766
	I1226 22:37:27.429325  812961 oci.go:664] temporary error: container missing-upgrade-099766 status is  but expect it to be exited
	I1226 22:37:27.429349  812961 retry.go:31] will retry after 1.073658252s: couldn't verify container is exited. %v: unknown state "missing-upgrade-099766": docker container inspect missing-upgrade-099766 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-099766
	I1226 22:37:28.503689  812961 cli_runner.go:164] Run: docker container inspect missing-upgrade-099766 --format={{.State.Status}}
	W1226 22:37:28.522873  812961 cli_runner.go:211] docker container inspect missing-upgrade-099766 --format={{.State.Status}} returned with exit code 1
	I1226 22:37:28.522931  812961 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-099766": docker container inspect missing-upgrade-099766 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-099766
	I1226 22:37:28.522949  812961 oci.go:664] temporary error: container missing-upgrade-099766 status is  but expect it to be exited
	I1226 22:37:28.522974  812961 retry.go:31] will retry after 2.233548974s: couldn't verify container is exited. %v: unknown state "missing-upgrade-099766": docker container inspect missing-upgrade-099766 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-099766
	I1226 22:37:30.756723  812961 cli_runner.go:164] Run: docker container inspect missing-upgrade-099766 --format={{.State.Status}}
	W1226 22:37:30.777215  812961 cli_runner.go:211] docker container inspect missing-upgrade-099766 --format={{.State.Status}} returned with exit code 1
	I1226 22:37:30.777283  812961 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-099766": docker container inspect missing-upgrade-099766 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-099766
	I1226 22:37:30.777303  812961 oci.go:664] temporary error: container missing-upgrade-099766 status is  but expect it to be exited
	I1226 22:37:30.777328  812961 retry.go:31] will retry after 3.378978745s: couldn't verify container is exited. %v: unknown state "missing-upgrade-099766": docker container inspect missing-upgrade-099766 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-099766
	I1226 22:37:34.156701  812961 cli_runner.go:164] Run: docker container inspect missing-upgrade-099766 --format={{.State.Status}}
	W1226 22:37:34.172976  812961 cli_runner.go:211] docker container inspect missing-upgrade-099766 --format={{.State.Status}} returned with exit code 1
	I1226 22:37:34.173031  812961 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-099766": docker container inspect missing-upgrade-099766 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-099766
	I1226 22:37:34.173039  812961 oci.go:664] temporary error: container missing-upgrade-099766 status is  but expect it to be exited
	I1226 22:37:34.173064  812961 retry.go:31] will retry after 3.569060062s: couldn't verify container is exited. %v: unknown state "missing-upgrade-099766": docker container inspect missing-upgrade-099766 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-099766
	I1226 22:37:37.742321  812961 cli_runner.go:164] Run: docker container inspect missing-upgrade-099766 --format={{.State.Status}}
	W1226 22:37:37.775398  812961 cli_runner.go:211] docker container inspect missing-upgrade-099766 --format={{.State.Status}} returned with exit code 1
	I1226 22:37:37.775454  812961 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-099766": docker container inspect missing-upgrade-099766 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-099766
	I1226 22:37:37.775462  812961 oci.go:664] temporary error: container missing-upgrade-099766 status is  but expect it to be exited
	I1226 22:37:37.779920  812961 retry.go:31] will retry after 7.594276721s: couldn't verify container is exited. %v: unknown state "missing-upgrade-099766": docker container inspect missing-upgrade-099766 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-099766
	I1226 22:37:45.376672  812961 cli_runner.go:164] Run: docker container inspect missing-upgrade-099766 --format={{.State.Status}}
	W1226 22:37:45.414320  812961 cli_runner.go:211] docker container inspect missing-upgrade-099766 --format={{.State.Status}} returned with exit code 1
	I1226 22:37:45.414386  812961 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-099766": docker container inspect missing-upgrade-099766 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-099766
	I1226 22:37:45.414405  812961 oci.go:664] temporary error: container missing-upgrade-099766 status is  but expect it to be exited
	I1226 22:37:45.414438  812961 oci.go:88] couldn't shut down missing-upgrade-099766 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-099766": docker container inspect missing-upgrade-099766 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-099766
	 
	I1226 22:37:45.414503  812961 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-099766
	I1226 22:37:45.436333  812961 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-099766
	W1226 22:37:45.457985  812961 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-099766 returned with exit code 1
	I1226 22:37:45.458077  812961 cli_runner.go:164] Run: docker network inspect missing-upgrade-099766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1226 22:37:45.492796  812961 cli_runner.go:164] Run: docker network rm missing-upgrade-099766
	I1226 22:37:45.648389  812961 fix.go:114] Sleeping 1 second for extra luck!
	I1226 22:37:46.648541  812961 start.go:125] createHost starting for "" (driver="docker")
	I1226 22:37:46.664418  812961 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1226 22:37:46.664603  812961 start.go:159] libmachine.API.Create for "missing-upgrade-099766" (driver="docker")
	I1226 22:37:46.664634  812961 client.go:168] LocalClient.Create starting
	I1226 22:37:46.664690  812961 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem
	I1226 22:37:46.664726  812961 main.go:141] libmachine: Decoding PEM data...
	I1226 22:37:46.664749  812961 main.go:141] libmachine: Parsing certificate...
	I1226 22:37:46.664813  812961 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17857-697646/.minikube/certs/cert.pem
	I1226 22:37:46.664841  812961 main.go:141] libmachine: Decoding PEM data...
	I1226 22:37:46.664856  812961 main.go:141] libmachine: Parsing certificate...
	I1226 22:37:46.665100  812961 cli_runner.go:164] Run: docker network inspect missing-upgrade-099766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1226 22:37:46.689781  812961 cli_runner.go:211] docker network inspect missing-upgrade-099766 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1226 22:37:46.689869  812961 network_create.go:281] running [docker network inspect missing-upgrade-099766] to gather additional debugging logs...
	I1226 22:37:46.689886  812961 cli_runner.go:164] Run: docker network inspect missing-upgrade-099766
	W1226 22:37:46.713475  812961 cli_runner.go:211] docker network inspect missing-upgrade-099766 returned with exit code 1
	I1226 22:37:46.713504  812961 network_create.go:284] error running [docker network inspect missing-upgrade-099766]: docker network inspect missing-upgrade-099766: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-099766 not found
	I1226 22:37:46.713517  812961 network_create.go:286] output of [docker network inspect missing-upgrade-099766]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-099766 not found
	
	** /stderr **
	I1226 22:37:46.713619  812961 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1226 22:37:46.739912  812961 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d0b2e7e17d50 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:7d:93:49:57} reservation:<nil>}
	I1226 22:37:46.740701  812961 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-cb22699b10d7 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:f6:1e:82:ea} reservation:<nil>}
	I1226 22:37:46.741112  812961 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-50429a1e2500 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:ac:24:3d:72} reservation:<nil>}
	I1226 22:37:46.742480  812961 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40035ffe30}
	I1226 22:37:46.742514  812961 network_create.go:124] attempt to create docker network missing-upgrade-099766 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1226 22:37:46.742602  812961 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-099766 missing-upgrade-099766
	I1226 22:37:46.837740  812961 network_create.go:108] docker network missing-upgrade-099766 192.168.76.0/24 created
	I1226 22:37:46.837770  812961 kic.go:121] calculated static IP "192.168.76.2" for the "missing-upgrade-099766" container
	I1226 22:37:46.837843  812961 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1226 22:37:46.865505  812961 cli_runner.go:164] Run: docker volume create missing-upgrade-099766 --label name.minikube.sigs.k8s.io=missing-upgrade-099766 --label created_by.minikube.sigs.k8s.io=true
	I1226 22:37:46.883494  812961 oci.go:103] Successfully created a docker volume missing-upgrade-099766
	I1226 22:37:46.883590  812961 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-099766-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-099766 --entrypoint /usr/bin/test -v missing-upgrade-099766:/var gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e -d /var/lib
	I1226 22:37:47.920040  812961 cli_runner.go:217] Completed: docker run --rm --name missing-upgrade-099766-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-099766 --entrypoint /usr/bin/test -v missing-upgrade-099766:/var gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e -d /var/lib: (1.036408294s)
	I1226 22:37:47.920075  812961 oci.go:107] Successfully prepared a docker volume missing-upgrade-099766
	I1226 22:37:47.920100  812961 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	W1226 22:37:47.920258  812961 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1226 22:37:47.920379  812961 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1226 22:37:48.034752  812961 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-099766 --name missing-upgrade-099766 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-099766 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-099766 --network missing-upgrade-099766 --ip 192.168.76.2 --volume missing-upgrade-099766:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e
	I1226 22:37:48.542470  812961 cli_runner.go:164] Run: docker container inspect missing-upgrade-099766 --format={{.State.Running}}
	I1226 22:37:48.583151  812961 cli_runner.go:164] Run: docker container inspect missing-upgrade-099766 --format={{.State.Status}}
	I1226 22:37:48.627683  812961 cli_runner.go:164] Run: docker exec missing-upgrade-099766 stat /var/lib/dpkg/alternatives/iptables
	I1226 22:37:48.737762  812961 oci.go:144] the created container "missing-upgrade-099766" has a running status.
	I1226 22:37:48.737789  812961 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17857-697646/.minikube/machines/missing-upgrade-099766/id_rsa...
	I1226 22:37:49.386925  812961 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17857-697646/.minikube/machines/missing-upgrade-099766/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1226 22:37:49.418427  812961 cli_runner.go:164] Run: docker container inspect missing-upgrade-099766 --format={{.State.Status}}
	I1226 22:37:49.458728  812961 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1226 22:37:49.458752  812961 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-099766 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1226 22:37:49.587265  812961 cli_runner.go:164] Run: docker container inspect missing-upgrade-099766 --format={{.State.Status}}
	I1226 22:37:49.639050  812961 machine.go:88] provisioning docker machine ...
	I1226 22:37:49.639082  812961 ubuntu.go:169] provisioning hostname "missing-upgrade-099766"
	I1226 22:37:49.639177  812961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-099766
	I1226 22:37:49.691498  812961 main.go:141] libmachine: Using SSH client type: native
	I1226 22:37:49.691938  812961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33845 <nil> <nil>}
	I1226 22:37:49.691961  812961 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-099766 && echo "missing-upgrade-099766" | sudo tee /etc/hostname
	I1226 22:37:49.950208  812961 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-099766
	
	I1226 22:37:49.950288  812961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-099766
	I1226 22:37:49.989361  812961 main.go:141] libmachine: Using SSH client type: native
	I1226 22:37:49.989757  812961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33845 <nil> <nil>}
	I1226 22:37:49.989780  812961 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-099766' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-099766/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-099766' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1226 22:37:50.167184  812961 main.go:141] libmachine: SSH cmd err, output: <nil>: 
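	The SSH snippet above makes the new hostname locally resolvable by rewriting (or appending) the 127.0.1.1 alias, so afterwards /etc/hosts should contain a line like this (illustrative):
	    127.0.1.1 missing-upgrade-099766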
	I1226 22:37:50.167212  812961 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17857-697646/.minikube CaCertPath:/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17857-697646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17857-697646/.minikube}
	I1226 22:37:50.167231  812961 ubuntu.go:177] setting up certificates
	I1226 22:37:50.167241  812961 provision.go:83] configureAuth start
	I1226 22:37:50.167317  812961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-099766
	I1226 22:37:50.200119  812961 provision.go:138] copyHostCerts
	I1226 22:37:50.200205  812961 exec_runner.go:144] found /home/jenkins/minikube-integration/17857-697646/.minikube/cert.pem, removing ...
	I1226 22:37:50.200220  812961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17857-697646/.minikube/cert.pem
	I1226 22:37:50.200306  812961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17857-697646/.minikube/cert.pem (1123 bytes)
	I1226 22:37:50.200410  812961 exec_runner.go:144] found /home/jenkins/minikube-integration/17857-697646/.minikube/key.pem, removing ...
	I1226 22:37:50.200425  812961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17857-697646/.minikube/key.pem
	I1226 22:37:50.200456  812961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17857-697646/.minikube/key.pem (1679 bytes)
	I1226 22:37:50.200556  812961 exec_runner.go:144] found /home/jenkins/minikube-integration/17857-697646/.minikube/ca.pem, removing ...
	I1226 22:37:50.200584  812961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17857-697646/.minikube/ca.pem
	I1226 22:37:50.200632  812961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17857-697646/.minikube/ca.pem (1082 bytes)
	I1226 22:37:50.200728  812961 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17857-697646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-099766 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-099766]
	I1226 22:37:50.978662  812961 provision.go:172] copyRemoteCerts
	I1226 22:37:50.978732  812961 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1226 22:37:50.978782  812961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-099766
	I1226 22:37:50.997460  812961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33845 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/missing-upgrade-099766/id_rsa Username:docker}
	I1226 22:37:51.102680  812961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1226 22:37:51.130869  812961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1226 22:37:51.159810  812961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1226 22:37:51.183706  812961 provision.go:86] duration metric: configureAuth took 1.01644601s
	I1226 22:37:51.183732  812961 ubuntu.go:193] setting minikube options for container-runtime
	I1226 22:37:51.183927  812961 config.go:182] Loaded profile config "missing-upgrade-099766": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1226 22:37:51.184047  812961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-099766
	I1226 22:37:51.203408  812961 main.go:141] libmachine: Using SSH client type: native
	I1226 22:37:51.203831  812961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33845 <nil> <nil>}
	I1226 22:37:51.203853  812961 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1226 22:37:51.758522  812961 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1226 22:37:51.758544  812961 machine.go:91] provisioned docker machine in 2.119471237s
	I1226 22:37:51.758553  812961 client.go:171] LocalClient.Create took 5.093913922s
	I1226 22:37:51.758566  812961 start.go:167] duration metric: libmachine.API.Create for "missing-upgrade-099766" took 5.093965622s
	I1226 22:37:51.758574  812961 start.go:300] post-start starting for "missing-upgrade-099766" (driver="docker")
	I1226 22:37:51.758583  812961 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1226 22:37:51.758654  812961 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1226 22:37:51.758698  812961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-099766
	I1226 22:37:51.786639  812961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33845 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/missing-upgrade-099766/id_rsa Username:docker}
	I1226 22:37:51.902985  812961 ssh_runner.go:195] Run: cat /etc/os-release
	I1226 22:37:51.914112  812961 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1226 22:37:51.914142  812961 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1226 22:37:51.914154  812961 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1226 22:37:51.914161  812961 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1226 22:37:51.914175  812961 filesync.go:126] Scanning /home/jenkins/minikube-integration/17857-697646/.minikube/addons for local assets ...
	I1226 22:37:51.914237  812961 filesync.go:126] Scanning /home/jenkins/minikube-integration/17857-697646/.minikube/files for local assets ...
	I1226 22:37:51.914330  812961 filesync.go:149] local asset: /home/jenkins/minikube-integration/17857-697646/.minikube/files/etc/ssl/certs/7030362.pem -> 7030362.pem in /etc/ssl/certs
	I1226 22:37:51.914437  812961 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1226 22:37:51.930074  812961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/files/etc/ssl/certs/7030362.pem --> /etc/ssl/certs/7030362.pem (1708 bytes)
	I1226 22:37:51.968362  812961 start.go:303] post-start completed in 209.774869ms
	I1226 22:37:51.968899  812961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-099766
	I1226 22:37:51.990548  812961 profile.go:148] Saving config to /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/missing-upgrade-099766/config.json ...
	I1226 22:37:51.990826  812961 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1226 22:37:51.990867  812961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-099766
	I1226 22:37:52.022499  812961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33845 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/missing-upgrade-099766/id_rsa Username:docker}
	I1226 22:37:52.137916  812961 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1226 22:37:52.150607  812961 start.go:128] duration metric: createHost completed in 5.502033238s
	I1226 22:37:52.150701  812961 cli_runner.go:164] Run: docker container inspect missing-upgrade-099766 --format={{.State.Status}}
	W1226 22:37:52.197310  812961 fix.go:128] unexpected machine state, will restart: <nil>
	I1226 22:37:52.197338  812961 machine.go:88] provisioning docker machine ...
	I1226 22:37:52.197355  812961 ubuntu.go:169] provisioning hostname "missing-upgrade-099766"
	I1226 22:37:52.197419  812961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-099766
	I1226 22:37:52.247667  812961 main.go:141] libmachine: Using SSH client type: native
	I1226 22:37:52.248101  812961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33845 <nil> <nil>}
	I1226 22:37:52.248113  812961 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-099766 && echo "missing-upgrade-099766" | sudo tee /etc/hostname
	I1226 22:37:52.446316  812961 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-099766
	
	I1226 22:37:52.446460  812961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-099766
	I1226 22:37:52.481888  812961 main.go:141] libmachine: Using SSH client type: native
	I1226 22:37:52.482290  812961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33845 <nil> <nil>}
	I1226 22:37:52.482308  812961 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-099766' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-099766/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-099766' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1226 22:37:52.656923  812961 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1226 22:37:52.656989  812961 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17857-697646/.minikube CaCertPath:/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17857-697646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17857-697646/.minikube}
	I1226 22:37:52.657020  812961 ubuntu.go:177] setting up certificates
	I1226 22:37:52.657065  812961 provision.go:83] configureAuth start
	I1226 22:37:52.657145  812961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-099766
	I1226 22:37:52.719345  812961 provision.go:138] copyHostCerts
	I1226 22:37:52.719408  812961 exec_runner.go:144] found /home/jenkins/minikube-integration/17857-697646/.minikube/ca.pem, removing ...
	I1226 22:37:52.719416  812961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17857-697646/.minikube/ca.pem
	I1226 22:37:52.719491  812961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17857-697646/.minikube/ca.pem (1082 bytes)
	I1226 22:37:52.719578  812961 exec_runner.go:144] found /home/jenkins/minikube-integration/17857-697646/.minikube/cert.pem, removing ...
	I1226 22:37:52.719583  812961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17857-697646/.minikube/cert.pem
	I1226 22:37:52.719612  812961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17857-697646/.minikube/cert.pem (1123 bytes)
	I1226 22:37:52.719663  812961 exec_runner.go:144] found /home/jenkins/minikube-integration/17857-697646/.minikube/key.pem, removing ...
	I1226 22:37:52.719668  812961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17857-697646/.minikube/key.pem
	I1226 22:37:52.719689  812961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17857-697646/.minikube/key.pem (1679 bytes)
	I1226 22:37:52.719730  812961 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17857-697646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-099766 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-099766]
	I1226 22:37:53.402196  812961 provision.go:172] copyRemoteCerts
	I1226 22:37:53.402308  812961 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1226 22:37:53.402386  812961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-099766
	I1226 22:37:53.421169  812961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33845 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/missing-upgrade-099766/id_rsa Username:docker}
	I1226 22:37:53.525600  812961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1226 22:37:53.567792  812961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1226 22:37:53.614489  812961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1226 22:37:53.662631  812961 provision.go:86] duration metric: configureAuth took 1.005535077s
	I1226 22:37:53.662695  812961 ubuntu.go:193] setting minikube options for container-runtime
	I1226 22:37:53.662917  812961 config.go:182] Loaded profile config "missing-upgrade-099766": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1226 22:37:53.663067  812961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-099766
	I1226 22:37:53.688950  812961 main.go:141] libmachine: Using SSH client type: native
	I1226 22:37:53.689346  812961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33845 <nil> <nil>}
	I1226 22:37:53.689361  812961 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1226 22:37:54.114746  812961 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1226 22:37:54.114812  812961 machine.go:91] provisioned docker machine in 1.917465141s
	I1226 22:37:54.114837  812961 start.go:300] post-start starting for "missing-upgrade-099766" (driver="docker")
	I1226 22:37:54.114868  812961 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1226 22:37:54.114978  812961 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1226 22:37:54.115046  812961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-099766
	I1226 22:37:54.140859  812961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33845 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/missing-upgrade-099766/id_rsa Username:docker}
	I1226 22:37:54.242564  812961 ssh_runner.go:195] Run: cat /etc/os-release
	I1226 22:37:54.247054  812961 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1226 22:37:54.247076  812961 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1226 22:37:54.247087  812961 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1226 22:37:54.247095  812961 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1226 22:37:54.247105  812961 filesync.go:126] Scanning /home/jenkins/minikube-integration/17857-697646/.minikube/addons for local assets ...
	I1226 22:37:54.247158  812961 filesync.go:126] Scanning /home/jenkins/minikube-integration/17857-697646/.minikube/files for local assets ...
	I1226 22:37:54.247235  812961 filesync.go:149] local asset: /home/jenkins/minikube-integration/17857-697646/.minikube/files/etc/ssl/certs/7030362.pem -> 7030362.pem in /etc/ssl/certs
	I1226 22:37:54.247336  812961 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1226 22:37:54.257055  812961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/files/etc/ssl/certs/7030362.pem --> /etc/ssl/certs/7030362.pem (1708 bytes)
	I1226 22:37:54.282236  812961 start.go:303] post-start completed in 167.370627ms
	I1226 22:37:54.282354  812961 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1226 22:37:54.282428  812961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-099766
	I1226 22:37:54.305953  812961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33845 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/missing-upgrade-099766/id_rsa Username:docker}
	I1226 22:37:54.403713  812961 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1226 22:37:54.410290  812961 fix.go:56] fixHost completed within 29.13684373s
	I1226 22:37:54.410314  812961 start.go:83] releasing machines lock for "missing-upgrade-099766", held for 29.136891605s
	I1226 22:37:54.410396  812961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-099766
	I1226 22:37:54.429652  812961 ssh_runner.go:195] Run: cat /version.json
	I1226 22:37:54.429703  812961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-099766
	I1226 22:37:54.429933  812961 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1226 22:37:54.430002  812961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-099766
	I1226 22:37:54.462381  812961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33845 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/missing-upgrade-099766/id_rsa Username:docker}
	I1226 22:37:54.489286  812961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33845 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/missing-upgrade-099766/id_rsa Username:docker}
	W1226 22:37:54.569791  812961 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1226 22:37:54.569941  812961 ssh_runner.go:195] Run: systemctl --version
	I1226 22:37:54.708072  812961 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1226 22:37:54.812907  812961 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1226 22:37:54.818986  812961 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1226 22:37:54.845632  812961 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1226 22:37:54.845706  812961 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1226 22:37:54.879751  812961 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
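	Before handing pod networking to its own CNI, minikube renames the image's default loopback and bridge CNI configs with a .mk_disabled suffix so CRI-O will not load them; listing the directory afterwards would show something like the following (illustrative, names taken from the log line above):
	    ls /etc/cni/net.d
	    100-crio-bridge.conf.mk_disabled  87-podman-bridge.conflist.mk_disabled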
	I1226 22:37:54.879772  812961 start.go:475] detecting cgroup driver to use...
	I1226 22:37:54.879805  812961 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1226 22:37:54.879857  812961 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1226 22:37:54.931209  812961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1226 22:37:54.945052  812961 docker.go:203] disabling cri-docker service (if available) ...
	I1226 22:37:54.945165  812961 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1226 22:37:54.958664  812961 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1226 22:37:54.977479  812961 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1226 22:37:54.996276  812961 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1226 22:37:54.996385  812961 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1226 22:37:55.147509  812961 docker.go:219] disabling docker service ...
	I1226 22:37:55.147639  812961 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1226 22:37:55.162973  812961 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1226 22:37:55.178777  812961 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1226 22:37:55.310568  812961 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1226 22:37:55.449710  812961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1226 22:37:55.464561  812961 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1226 22:37:55.483211  812961 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1226 22:37:55.483332  812961 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1226 22:37:55.497502  812961 out.go:177] 
	W1226 22:37:55.499211  812961 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1226 22:37:55.499377  812961 out.go:239] * 
	W1226 22:37:55.500925  812961 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1226 22:37:55.503219  812961 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:344: failed missing container upgrade from v1.17.0. args: out/minikube-linux-arm64 start -p missing-upgrade-099766 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio : exit status 90
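The stderr above shows the root cause: the current minikube binary rewrites pause_image in the CRI-O drop-in config at /etc/crio/crio.conf.d/02-crio.conf, but the kicbase v0.0.17 image this profile was created from predates that layout (its CRI-O configuration presumably lives in a single /etc/crio/crio.conf), so sed has no file to edit and exits with status 2. A minimal sketch of the kind of guard a fix might use, assuming the legacy path:

    conf=/etc/crio/crio.conf.d/02-crio.conf
    # fall back to the legacy single-file config (assumption for old kicbase images)
    [ -f "$conf" ] || conf=/etc/crio/crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$conf"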
version_upgrade_test.go:346: *** TestMissingContainerUpgrade FAILED at 2023-12-26 22:37:55.56020546 +0000 UTC m=+3196.213180323
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-099766
helpers_test.go:235: (dbg) docker inspect missing-upgrade-099766:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c5a7893d4c0112ade9ebe8e771885e3806f75246b4bebcd1969ec9fd1f66427c",
	        "Created": "2023-12-26T22:37:48.06778365Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 814791,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-26T22:37:48.533689636Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/c5a7893d4c0112ade9ebe8e771885e3806f75246b4bebcd1969ec9fd1f66427c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c5a7893d4c0112ade9ebe8e771885e3806f75246b4bebcd1969ec9fd1f66427c/hostname",
	        "HostsPath": "/var/lib/docker/containers/c5a7893d4c0112ade9ebe8e771885e3806f75246b4bebcd1969ec9fd1f66427c/hosts",
	        "LogPath": "/var/lib/docker/containers/c5a7893d4c0112ade9ebe8e771885e3806f75246b4bebcd1969ec9fd1f66427c/c5a7893d4c0112ade9ebe8e771885e3806f75246b4bebcd1969ec9fd1f66427c-json.log",
	        "Name": "/missing-upgrade-099766",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-099766:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "missing-upgrade-099766",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/bf279819a512a7da17c4b68679c547b1c863bc057e4c8c203db0b6b75850201e-init/diff:/var/lib/docker/overlay2/aff80e67ef37084c95001ba1645fe003d59f75e0ac4ce6a97eaf5e98ca65986b/diff:/var/lib/docker/overlay2/6ddce3370d95189066a22476d88c1541cb0053e2f84f582ff4704673902295c9/diff:/var/lib/docker/overlay2/d1f4d59aa7e872cbc3462c8583489cf532d1b53f20bf792444a233e424728358/diff:/var/lib/docker/overlay2/924e786130c3c08539b8c35c04e648a4d4490cda5394ad9c8dcc9fb8688d0576/diff:/var/lib/docker/overlay2/de5355dbb6d1afebebf1c3338bb059ec8c542ab19dc64da6688e47bafaf61de6/diff:/var/lib/docker/overlay2/2f761bb26857a7789d517afd4f68a91b2bcd5dfd2fb4dc5d325f05266ae224cd/diff:/var/lib/docker/overlay2/f9635816969bd536ced80b00d0ccbac1d57f30a4750c97d1d59e18c872584ec6/diff:/var/lib/docker/overlay2/0ad6022370e0a5c7b53b6d516417c3c0016557e4c6f3eb5ca88c78e5dca89e23/diff:/var/lib/docker/overlay2/c1af72edaeb4fb107cd7fc958d16b5038a615b1a64a3c5d67cedea59f7f62fc5/diff:/var/lib/docker/overlay2/0c8819
30478d789e0d20557da430b479a8cb8b2859dd7f568ecce4d99859ab27/diff:/var/lib/docker/overlay2/6076fe703e0d6b736ab2dbb1999793b641e40a7a22ae9ebe6f620e16d939d652/diff:/var/lib/docker/overlay2/add2acad624e2e6e55beace00c94b0828eaa2e490f657909bcec77a7f5fdc97f/diff:/var/lib/docker/overlay2/214f39a4d729ec5a77bd514bb457f4b0846f2837488cd8a3273168b280833952/diff:/var/lib/docker/overlay2/11f3ac138f7599824b3e533432e0daaee1a307fba67aa95f20dabbf60eb2a3f0/diff:/var/lib/docker/overlay2/9edd13f066493f4d25ee0afa6ab887b0ff8fcac6b8c19279096bb5b12c9b2d2c/diff:/var/lib/docker/overlay2/2a42b0990c72e84b8c59d44672039bb2b1c023183b070ff8866c5051f3211903/diff:/var/lib/docker/overlay2/d5e40271e48a7a01b8d53cc0e8aa22b6c1433b04a8939a62904ffa4c9901a1b8/diff:/var/lib/docker/overlay2/e4c93f5299e84425537516dd398ce898ef0ad2f41f9ec69a5ef2237e1b17c7d5/diff:/var/lib/docker/overlay2/44efd82990bd64c9dd316bd1b3281e37124a5a8e7a86465b851a4f0dffd3b731/diff:/var/lib/docker/overlay2/11ce1b3a1f58593ab02eaf7e259a4e197c6233e42c5521a471d61808c0ea6875/diff:/var/lib/d
ocker/overlay2/f96972bda05370b5933e8c1a6a2055f27c0c2b2683f3eed5b9f314478b523a9d/diff:/var/lib/docker/overlay2/d04a2f6bae9f32da9e60d3461e185eff86e42a99112f64632157a39046774dd7/diff:/var/lib/docker/overlay2/0da1bfdb1d6e6c5df59981fad0b86d2717ee4880a398794dbe83a6098739dbf0/diff:/var/lib/docker/overlay2/8e5d7dd942e4af05722a4e840d48ca91f9f23db271d8aa6627999d29412142a9/diff:/var/lib/docker/overlay2/34ea282cd38cee4123ba00375f826fad5408f882029906977e60135dcca06f81/diff:/var/lib/docker/overlay2/a7b2d6d927c131d009c54c5c70057dbfceef4620a0d6f618b6887e852d9c2e63/diff:/var/lib/docker/overlay2/79f1494ca8635517457b1f9669bfe6b18daa80238494b0b36f00d9735081a331/diff:/var/lib/docker/overlay2/cba58c07ebbd990eedd7dd338d42d011f94086a86eeb2638c2c480ba577efaae/diff:/var/lib/docker/overlay2/63aa4aac79801b4f220cad13b579dcd594bbedb0bb3f2076eef62a17deb653d4/diff:/var/lib/docker/overlay2/c43be29e78d2f745704d824fe4ca4790f46e4c44c6759538c6a70d0be3637166/diff:/var/lib/docker/overlay2/67eff87ade5ffdbdbff902912bc71632c082aac573ee6d60fc3522d5693
ef129/diff:/var/lib/docker/overlay2/4087b4d9b7a4653df5e0a6b6b890c96e1e1add3e42e2bdb9b51f8caa8579aa7d/diff:/var/lib/docker/overlay2/ef94ff6fc8983bbc46d063de982253af5c9f6382e2867b04b451ab193d2e7ddf/diff:/var/lib/docker/overlay2/ebd2b40ad654fb1dd4dbb19c4e52f582aa95361bdcf8431874cfb1b98bef6005/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bf279819a512a7da17c4b68679c547b1c863bc057e4c8c203db0b6b75850201e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bf279819a512a7da17c4b68679c547b1c863bc057e4c8c203db0b6b75850201e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bf279819a512a7da17c4b68679c547b1c863bc057e4c8c203db0b6b75850201e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-099766",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-099766/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-099766",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-099766",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-099766",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4a3662a80066f474de267f6c462869e41017e081f67135bc45d20c3a7232cdc2",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33845"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33844"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33841"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33843"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33842"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/4a3662a80066",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "missing-upgrade-099766": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "c5a7893d4c01",
	                        "missing-upgrade-099766"
	                    ],
	                    "NetworkID": "9c019f2ca305fe2314d181057b7976564b349790d1b79e1b37db3270113f2bc5",
	                    "EndpointID": "d5699f854e09284ccc6e34dddbdac7d3bf861a93e20c05e11e59eaf8dd44c7ea",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
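For reference, the port-lookup template used throughout the log above resolves against the NetworkSettings.Ports map in this inspect output; run against this container it prints the SSH host port:

    docker container inspect missing-upgrade-099766 --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
    33845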
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-099766 -n missing-upgrade-099766
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-099766 -n missing-upgrade-099766: exit status 6 (379.511587ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1226 22:37:55.979253  815978 status.go:415] kubeconfig endpoint: got: 192.168.59.183:8443, want: 192.168.76.2:8443

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-099766" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
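The status error itself is only a stale kubeconfig: the endpoint recorded for this profile (192.168.59.183:8443, presumably written when the cluster was first created by the v1.17.0 binary) no longer matches the recreated container's address (192.168.76.2:8443). One illustrative way to confirm and repair it outside of CI:

    kubectl config view -o jsonpath='{.clusters[?(@.name=="missing-upgrade-099766")].cluster.server}'
    minikube update-context -p missing-upgrade-099766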
helpers_test.go:175: Cleaning up "missing-upgrade-099766" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-099766
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-099766: (1.894811286s)
--- FAIL: TestMissingContainerUpgrade (182.13s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (93.25s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.17.0.1716102515.exe start -p stopped-upgrade-572640 --memory=2200 --vm-driver=docker  --container-runtime=crio
E1226 22:38:11.961434  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.17.0.1716102515.exe start -p stopped-upgrade-572640 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m6.557820932s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.17.0.1716102515.exe -p stopped-upgrade-572640 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.17.0.1716102515.exe -p stopped-upgrade-572640 stop: (20.336833309s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-572640 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1226 22:39:26.121787  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/client.crt: no such file or directory
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p stopped-upgrade-572640 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (6.358138624s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-572640] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17857
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17857-697646/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17857-697646/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-572640 in cluster stopped-upgrade-572640
	* Pulling base image v0.0.42-1703498848-17857 ...
	* Restarting existing docker container for "stopped-upgrade-572640" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1226 22:39:25.990478  820156 out.go:296] Setting OutFile to fd 1 ...
	I1226 22:39:25.990607  820156 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 22:39:25.990616  820156 out.go:309] Setting ErrFile to fd 2...
	I1226 22:39:25.990621  820156 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 22:39:25.990871  820156 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17857-697646/.minikube/bin
	I1226 22:39:25.991251  820156 out.go:303] Setting JSON to false
	I1226 22:39:25.992163  820156 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":22900,"bootTime":1703607466,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1226 22:39:25.992256  820156 start.go:138] virtualization:  
	I1226 22:39:25.994884  820156 out.go:177] * [stopped-upgrade-572640] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1226 22:39:25.996775  820156 out.go:177]   - MINIKUBE_LOCATION=17857
	I1226 22:39:25.996879  820156 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17857-697646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4
	I1226 22:39:25.996928  820156 notify.go:220] Checking for updates...
	I1226 22:39:26.002174  820156 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1226 22:39:26.006125  820156 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17857-697646/kubeconfig
	I1226 22:39:26.008380  820156 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17857-697646/.minikube
	I1226 22:39:26.010395  820156 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1226 22:39:26.012573  820156 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1226 22:39:26.014905  820156 config.go:182] Loaded profile config "stopped-upgrade-572640": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1226 22:39:26.017406  820156 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1226 22:39:26.019307  820156 driver.go:392] Setting default libvirt URI to qemu:///system
	I1226 22:39:26.047811  820156 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1226 22:39:26.047928  820156 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 22:39:26.102470  820156 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17857-697646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4.checksum
	I1226 22:39:26.155521  820156 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:45 SystemTime:2023-12-26 22:39:26.145228766 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1226 22:39:26.155622  820156 docker.go:295] overlay module found
	I1226 22:39:26.158360  820156 out.go:177] * Using the docker driver based on existing profile
	I1226 22:39:26.160165  820156 start.go:298] selected driver: docker
	I1226 22:39:26.160180  820156 start.go:902] validating driver "docker" against &{Name:stopped-upgrade-572640 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-572640 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.225 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1226 22:39:26.160266  820156 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1226 22:39:26.161044  820156 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 22:39:26.244775  820156 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:45 SystemTime:2023-12-26 22:39:26.235089476 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1226 22:39:26.245121  820156 cni.go:84] Creating CNI manager for ""
	I1226 22:39:26.245143  820156 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1226 22:39:26.245157  820156 start_flags.go:323] config:
	{Name:stopped-upgrade-572640 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-572640 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.225 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1226 22:39:26.247725  820156 out.go:177] * Starting control plane node stopped-upgrade-572640 in cluster stopped-upgrade-572640
	I1226 22:39:26.249949  820156 cache.go:121] Beginning downloading kic base image for docker with crio
	I1226 22:39:26.252001  820156 out.go:177] * Pulling base image v0.0.42-1703498848-17857 ...
	I1226 22:39:26.253814  820156 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I1226 22:39:26.253915  820156 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I1226 22:39:26.273133  820156 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I1226 22:39:26.273158  820156 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W1226 22:39:26.322391  820156 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1226 22:39:26.323311  820156 profile.go:148] Saving config to /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/stopped-upgrade-572640/config.json ...
	I1226 22:39:26.323446  820156 cache.go:107] acquiring lock: {Name:mkb0c415fb66519dc4d25de2f2e85a3d2941a136 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 22:39:26.323534  820156 cache.go:115] /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1226 22:39:26.323543  820156 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 103.71µs
	I1226 22:39:26.323558  820156 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1226 22:39:26.323569  820156 cache.go:194] Successfully downloaded all kic artifacts
	I1226 22:39:26.323569  820156 cache.go:107] acquiring lock: {Name:mka92e10ddf463094441c8aa2f9b2d35bcc9ef4b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 22:39:26.323601  820156 cache.go:115] /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I1226 22:39:26.323608  820156 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 38.301µs
	I1226 22:39:26.323604  820156 start.go:365] acquiring machines lock for stopped-upgrade-572640: {Name:mkfc500a42c7cbb8568664fa60f291b8607a2a33 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 22:39:26.323615  820156 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I1226 22:39:26.323624  820156 cache.go:107] acquiring lock: {Name:mk7865403d3c41f6f816dbb1cbc2462d431f3f1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 22:39:26.323645  820156 start.go:369] acquired machines lock for "stopped-upgrade-572640" in 27.06µs
	I1226 22:39:26.323652  820156 cache.go:115] /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I1226 22:39:26.323658  820156 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 34.543µs
	I1226 22:39:26.323664  820156 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I1226 22:39:26.323667  820156 start.go:96] Skipping create...Using existing machine configuration
	I1226 22:39:26.323674  820156 fix.go:54] fixHost starting: 
	I1226 22:39:26.323673  820156 cache.go:107] acquiring lock: {Name:mk3b7a098e6afe4fbdba7020b7ff9912a911d97a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 22:39:26.323698  820156 cache.go:115] /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I1226 22:39:26.323702  820156 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 30.424µs
	I1226 22:39:26.323708  820156 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I1226 22:39:26.323719  820156 cache.go:107] acquiring lock: {Name:mk0889c5c52f22b229acaa40a5f123bee3f71fb2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 22:39:26.323745  820156 cache.go:115] /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I1226 22:39:26.323751  820156 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 33.123µs
	I1226 22:39:26.323757  820156 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I1226 22:39:26.323766  820156 cache.go:107] acquiring lock: {Name:mk6dd42b4f5730263f9316f596eb75b275e6a548 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 22:39:26.323789  820156 cache.go:115] /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I1226 22:39:26.323793  820156 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 28.734µs
	I1226 22:39:26.323799  820156 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I1226 22:39:26.323807  820156 cache.go:107] acquiring lock: {Name:mk1bd0e4578a8b4054cf01e8271f46e2f31f33ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 22:39:26.323829  820156 cache.go:115] /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I1226 22:39:26.323834  820156 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 27.946µs
	I1226 22:39:26.323840  820156 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I1226 22:39:26.323848  820156 cache.go:107] acquiring lock: {Name:mke4ec52c0c8d7fadbdda753d37368ed659c96e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1226 22:39:26.323871  820156 cache.go:115] /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I1226 22:39:26.323876  820156 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 29.242µs
	I1226 22:39:26.323882  820156 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17857-697646/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I1226 22:39:26.323887  820156 cache.go:87] Successfully saved all images to host disk.
	I1226 22:39:26.323931  820156 cli_runner.go:164] Run: docker container inspect stopped-upgrade-572640 --format={{.State.Status}}
	I1226 22:39:26.346243  820156 fix.go:102] recreateIfNeeded on stopped-upgrade-572640: state=Stopped err=<nil>
	W1226 22:39:26.346272  820156 fix.go:128] unexpected machine state, will restart: <nil>
	I1226 22:39:26.348545  820156 out.go:177] * Restarting existing docker container for "stopped-upgrade-572640" ...
	I1226 22:39:26.350363  820156 cli_runner.go:164] Run: docker start stopped-upgrade-572640
	I1226 22:39:26.673419  820156 cli_runner.go:164] Run: docker container inspect stopped-upgrade-572640 --format={{.State.Status}}
	I1226 22:39:26.694691  820156 kic.go:430] container "stopped-upgrade-572640" state is running.
	I1226 22:39:26.695075  820156 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-572640
	I1226 22:39:26.718533  820156 profile.go:148] Saving config to /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/stopped-upgrade-572640/config.json ...
	I1226 22:39:26.718779  820156 machine.go:88] provisioning docker machine ...
	I1226 22:39:26.718802  820156 ubuntu.go:169] provisioning hostname "stopped-upgrade-572640"
	I1226 22:39:26.718862  820156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-572640
	I1226 22:39:26.746026  820156 main.go:141] libmachine: Using SSH client type: native
	I1226 22:39:26.747225  820156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33853 <nil> <nil>}
	I1226 22:39:26.747248  820156 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-572640 && echo "stopped-upgrade-572640" | sudo tee /etc/hostname
	I1226 22:39:26.747835  820156 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40968->127.0.0.1:33853: read: connection reset by peer
	I1226 22:39:29.902229  820156 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-572640
	
	I1226 22:39:29.902305  820156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-572640
	I1226 22:39:29.920945  820156 main.go:141] libmachine: Using SSH client type: native
	I1226 22:39:29.921349  820156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33853 <nil> <nil>}
	I1226 22:39:29.921373  820156 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-572640' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-572640/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-572640' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1226 22:39:30.070456  820156 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1226 22:39:30.070488  820156 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17857-697646/.minikube CaCertPath:/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17857-697646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17857-697646/.minikube}
	I1226 22:39:30.070520  820156 ubuntu.go:177] setting up certificates
	I1226 22:39:30.070530  820156 provision.go:83] configureAuth start
	I1226 22:39:30.070610  820156 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-572640
	I1226 22:39:30.092967  820156 provision.go:138] copyHostCerts
	I1226 22:39:30.093089  820156 exec_runner.go:144] found /home/jenkins/minikube-integration/17857-697646/.minikube/ca.pem, removing ...
	I1226 22:39:30.093098  820156 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17857-697646/.minikube/ca.pem
	I1226 22:39:30.093183  820156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17857-697646/.minikube/ca.pem (1082 bytes)
	I1226 22:39:30.093298  820156 exec_runner.go:144] found /home/jenkins/minikube-integration/17857-697646/.minikube/cert.pem, removing ...
	I1226 22:39:30.093304  820156 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17857-697646/.minikube/cert.pem
	I1226 22:39:30.093339  820156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17857-697646/.minikube/cert.pem (1123 bytes)
	I1226 22:39:30.093713  820156 exec_runner.go:144] found /home/jenkins/minikube-integration/17857-697646/.minikube/key.pem, removing ...
	I1226 22:39:30.093730  820156 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17857-697646/.minikube/key.pem
	I1226 22:39:30.093772  820156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17857-697646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17857-697646/.minikube/key.pem (1679 bytes)
	I1226 22:39:30.093858  820156 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17857-697646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-572640 san=[192.168.59.225 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-572640]
	I1226 22:39:30.291951  820156 provision.go:172] copyRemoteCerts
	I1226 22:39:30.292026  820156 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1226 22:39:30.292068  820156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-572640
	I1226 22:39:30.315271  820156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33853 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/stopped-upgrade-572640/id_rsa Username:docker}
	I1226 22:39:30.413437  820156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1226 22:39:30.436891  820156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1226 22:39:30.459662  820156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1226 22:39:30.483003  820156 provision.go:86] duration metric: configureAuth took 412.435972ms
	I1226 22:39:30.483032  820156 ubuntu.go:193] setting minikube options for container-runtime
	I1226 22:39:30.483245  820156 config.go:182] Loaded profile config "stopped-upgrade-572640": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1226 22:39:30.483367  820156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-572640
	I1226 22:39:30.502238  820156 main.go:141] libmachine: Using SSH client type: native
	I1226 22:39:30.502645  820156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33853 <nil> <nil>}
	I1226 22:39:30.502668  820156 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1226 22:39:30.926817  820156 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1226 22:39:30.926840  820156 machine.go:91] provisioned docker machine in 4.208043057s
	I1226 22:39:30.926851  820156 start.go:300] post-start starting for "stopped-upgrade-572640" (driver="docker")
	I1226 22:39:30.926861  820156 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1226 22:39:30.926924  820156 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1226 22:39:30.926994  820156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-572640
	I1226 22:39:30.947588  820156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33853 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/stopped-upgrade-572640/id_rsa Username:docker}
	I1226 22:39:31.052916  820156 ssh_runner.go:195] Run: cat /etc/os-release
	I1226 22:39:31.057983  820156 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1226 22:39:31.058007  820156 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1226 22:39:31.058018  820156 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1226 22:39:31.058025  820156 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1226 22:39:31.058035  820156 filesync.go:126] Scanning /home/jenkins/minikube-integration/17857-697646/.minikube/addons for local assets ...
	I1226 22:39:31.058100  820156 filesync.go:126] Scanning /home/jenkins/minikube-integration/17857-697646/.minikube/files for local assets ...
	I1226 22:39:31.058194  820156 filesync.go:149] local asset: /home/jenkins/minikube-integration/17857-697646/.minikube/files/etc/ssl/certs/7030362.pem -> 7030362.pem in /etc/ssl/certs
	I1226 22:39:31.058305  820156 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1226 22:39:31.070758  820156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17857-697646/.minikube/files/etc/ssl/certs/7030362.pem --> /etc/ssl/certs/7030362.pem (1708 bytes)
	I1226 22:39:31.098969  820156 start.go:303] post-start completed in 172.101329ms
	I1226 22:39:31.099116  820156 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1226 22:39:31.099204  820156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-572640
	I1226 22:39:31.122131  820156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33853 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/stopped-upgrade-572640/id_rsa Username:docker}
	I1226 22:39:31.227101  820156 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1226 22:39:31.234482  820156 fix.go:56] fixHost completed within 4.910801224s
	I1226 22:39:31.234505  820156 start.go:83] releasing machines lock for "stopped-upgrade-572640", held for 4.910851175s
	I1226 22:39:31.234576  820156 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-572640
	I1226 22:39:31.258145  820156 ssh_runner.go:195] Run: cat /version.json
	I1226 22:39:31.258196  820156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-572640
	I1226 22:39:31.258416  820156 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1226 22:39:31.258458  820156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-572640
	I1226 22:39:31.298785  820156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33853 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/stopped-upgrade-572640/id_rsa Username:docker}
	I1226 22:39:31.311238  820156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33853 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/stopped-upgrade-572640/id_rsa Username:docker}
	W1226 22:39:31.484624  820156 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1226 22:39:31.484702  820156 ssh_runner.go:195] Run: systemctl --version
	I1226 22:39:31.490124  820156 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1226 22:39:31.617358  820156 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1226 22:39:31.623448  820156 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1226 22:39:31.653858  820156 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1226 22:39:31.653935  820156 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1226 22:39:31.701915  820156 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1226 22:39:31.701935  820156 start.go:475] detecting cgroup driver to use...
	I1226 22:39:31.701966  820156 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1226 22:39:31.702014  820156 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1226 22:39:31.762866  820156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1226 22:39:31.778018  820156 docker.go:203] disabling cri-docker service (if available) ...
	I1226 22:39:31.778086  820156 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1226 22:39:31.790088  820156 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1226 22:39:31.805141  820156 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1226 22:39:31.824759  820156 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1226 22:39:31.824838  820156 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1226 22:39:31.966152  820156 docker.go:219] disabling docker service ...
	I1226 22:39:31.966222  820156 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1226 22:39:31.981860  820156 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1226 22:39:31.998031  820156 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1226 22:39:32.115515  820156 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1226 22:39:32.226554  820156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1226 22:39:32.238807  820156 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1226 22:39:32.257707  820156 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1226 22:39:32.257768  820156 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1226 22:39:32.271276  820156 out.go:177] 
	W1226 22:39:32.273156  820156 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1226 22:39:32.273190  820156 out.go:239] * 
	W1226 22:39:32.274163  820156 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1226 22:39:32.276373  820156 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p stopped-upgrade-572640 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (93.25s)
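
Root cause, as read from the log above: the upgrade path restores a v1.17.0-era base image (gcr.io/k8s-minikube/kicbase:v0.0.17), and the v1.20.2 cri-o preload tarball for arm64 returns 404, so minikube falls back to individually cached images. Provisioning then aborts at the pause-image step: the old base image does not ship /etc/crio/crio.conf.d/02-crio.conf, so the sed exits with status 2 and start fails with RUNTIME_ENABLE (exit 90). A minimal defensive sketch of that step follows; the drop-in path and sed expression are taken verbatim from the log, while the fallback to the monolithic /etc/crio/crio.conf on old images is an assumption, not minikube's actual fix:

	# Guard the pause_image rewrite against base images that predate the
	# /etc/crio/crio.conf.d drop-in layout (hypothetical fallback).
	conf=/etc/crio/crio.conf.d/02-crio.conf
	[ -f "$conf" ] || conf=/etc/crio/crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$conf"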

                                                
                                    

Test pass (272/315)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 14.09
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.09
10 TestDownloadOnly/v1.28.4/json-events 10.54
11 TestDownloadOnly/v1.28.4/preload-exists 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.09
17 TestDownloadOnly/v1.29.0-rc.2/json-events 10.56
18 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
22 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.39
23 TestDownloadOnly/DeleteAll 0.36
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.23
26 TestBinaryMirror 0.63
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.09
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.09
32 TestAddons/Setup 174.64
34 TestAddons/parallel/Registry 16.66
36 TestAddons/parallel/InspektorGadget 11.52
37 TestAddons/parallel/MetricsServer 6.05
41 TestAddons/parallel/Headlamp 12.55
42 TestAddons/parallel/CloudSpanner 5.72
43 TestAddons/parallel/LocalPath 9.6
44 TestAddons/parallel/NvidiaDevicePlugin 6.64
45 TestAddons/parallel/Yakd 6.01
48 TestAddons/serial/GCPAuth/Namespaces 0.19
49 TestAddons/StoppedEnableDisable 12.44
50 TestCertOptions 37.42
51 TestCertExpiration 241.94
53 TestForceSystemdFlag 41.4
54 TestForceSystemdEnv 44.87
60 TestErrorSpam/setup 30.59
61 TestErrorSpam/start 0.92
62 TestErrorSpam/status 1.17
63 TestErrorSpam/pause 1.94
64 TestErrorSpam/unpause 2.12
65 TestErrorSpam/stop 1.47
68 TestFunctional/serial/CopySyncFile 0
69 TestFunctional/serial/StartWithProxy 75.19
70 TestFunctional/serial/AuditLog 0
71 TestFunctional/serial/SoftStart 41.23
72 TestFunctional/serial/KubeContext 0.06
73 TestFunctional/serial/KubectlGetPods 0.1
76 TestFunctional/serial/CacheCmd/cache/add_remote 3.72
77 TestFunctional/serial/CacheCmd/cache/add_local 1.11
78 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
79 TestFunctional/serial/CacheCmd/cache/list 0.07
80 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.37
81 TestFunctional/serial/CacheCmd/cache/cache_reload 2.22
82 TestFunctional/serial/CacheCmd/cache/delete 0.15
83 TestFunctional/serial/MinikubeKubectlCmd 0.15
84 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.16
85 TestFunctional/serial/ExtraConfig 38.06
86 TestFunctional/serial/ComponentHealth 0.11
87 TestFunctional/serial/LogsCmd 1.93
88 TestFunctional/serial/LogsFileCmd 1.95
89 TestFunctional/serial/InvalidService 4.26
91 TestFunctional/parallel/ConfigCmd 0.68
92 TestFunctional/parallel/DashboardCmd 41.18
93 TestFunctional/parallel/DryRun 0.54
94 TestFunctional/parallel/InternationalLanguage 0.23
95 TestFunctional/parallel/StatusCmd 1.21
99 TestFunctional/parallel/ServiceCmdConnect 46.69
100 TestFunctional/parallel/AddonsCmd 0.19
103 TestFunctional/parallel/SSHCmd 0.85
104 TestFunctional/parallel/CpCmd 2.69
106 TestFunctional/parallel/FileSync 0.31
107 TestFunctional/parallel/CertSync 2.31
111 TestFunctional/parallel/NodeLabels 0.09
113 TestFunctional/parallel/NonActiveRuntimeDisabled 0.63
115 TestFunctional/parallel/License 0.29
117 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.76
118 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
121 TestFunctional/parallel/ServiceCmd/DeployApp 7.23
123 TestFunctional/parallel/ServiceCmd/List 0.56
124 TestFunctional/parallel/ServiceCmd/JSONOutput 0.57
125 TestFunctional/parallel/ServiceCmd/HTTPS 0.43
126 TestFunctional/parallel/ServiceCmd/Format 0.45
127 TestFunctional/parallel/ServiceCmd/URL 0.45
128 TestFunctional/parallel/ProfileCmd/profile_not_create 0.47
129 TestFunctional/parallel/ProfileCmd/profile_list 0.46
130 TestFunctional/parallel/ProfileCmd/profile_json_output 0.46
131 TestFunctional/parallel/MountCmd/any-port 17.54
132 TestFunctional/parallel/MountCmd/specific-port 2.05
133 TestFunctional/parallel/MountCmd/VerifyCleanup 2.18
134 TestFunctional/parallel/Version/short 0.1
135 TestFunctional/parallel/Version/components 1.37
136 TestFunctional/parallel/ImageCommands/ImageListShort 0.33
137 TestFunctional/parallel/ImageCommands/ImageListTable 0.35
138 TestFunctional/parallel/ImageCommands/ImageListJson 0.34
139 TestFunctional/parallel/ImageCommands/ImageListYaml 0.34
140 TestFunctional/parallel/ImageCommands/ImageBuild 2.93
141 TestFunctional/parallel/ImageCommands/Setup 2.57
142 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.56
143 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.95
144 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.69
145 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.99
146 TestFunctional/parallel/ImageCommands/ImageRemove 0.6
147 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.28
148 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.97
149 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
150 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
151 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
155 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
156 TestFunctional/delete_addon-resizer_images 0.09
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
162 TestIngressAddonLegacy/StartLegacyK8sCluster 82.45
165 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.66
169 TestJSONOutput/start/Command 76.85
170 TestJSONOutput/start/Audit 0
172 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
175 TestJSONOutput/pause/Command 0.84
176 TestJSONOutput/pause/Audit 0
178 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/unpause/Command 0.76
182 TestJSONOutput/unpause/Audit 0
184 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/stop/Command 5.9
188 TestJSONOutput/stop/Audit 0
190 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
192 TestErrorJSONOutput 0.27
194 TestKicCustomNetwork/create_custom_network 46.88
195 TestKicCustomNetwork/use_default_bridge_network 34.32
196 TestKicExistingNetwork 36.1
197 TestKicCustomSubnet 35.49
198 TestKicStaticIP 35.79
199 TestMainNoArgs 0.08
200 TestMinikubeProfile 71.9
203 TestMountStart/serial/StartWithMountFirst 9.91
204 TestMountStart/serial/VerifyMountFirst 0.31
205 TestMountStart/serial/StartWithMountSecond 7.31
206 TestMountStart/serial/VerifyMountSecond 0.29
207 TestMountStart/serial/DeleteFirst 1.69
208 TestMountStart/serial/VerifyMountPostDelete 0.29
209 TestMountStart/serial/Stop 1.25
210 TestMountStart/serial/RestartStopped 7.76
211 TestMountStart/serial/VerifyMountPostStop 0.29
214 TestMultiNode/serial/FreshStart2Nodes 128.86
215 TestMultiNode/serial/DeployApp2Nodes 6.88
217 TestMultiNode/serial/AddNode 47.51
218 TestMultiNode/serial/MultiNodeLabels 0.1
219 TestMultiNode/serial/ProfileList 0.36
220 TestMultiNode/serial/CopyFile 11.37
221 TestMultiNode/serial/StopNode 2.37
222 TestMultiNode/serial/StartAfterStop 12.95
223 TestMultiNode/serial/RestartKeepsNodes 120.59
224 TestMultiNode/serial/DeleteNode 5.27
225 TestMultiNode/serial/StopMultiNode 24
226 TestMultiNode/serial/RestartMultiNode 79.17
227 TestMultiNode/serial/ValidateNameConflict 38.18
232 TestPreload 142.2
234 TestScheduledStopUnix 109.42
237 TestInsufficientStorage 13.63
240 TestKubernetesUpgrade 389.88
243 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
244 TestNoKubernetes/serial/StartWithK8s 43.61
245 TestNoKubernetes/serial/StartWithStopK8s 19.87
246 TestNoKubernetes/serial/Start 10.17
247 TestNoKubernetes/serial/VerifyK8sNotRunning 0.38
248 TestNoKubernetes/serial/ProfileList 1.16
249 TestNoKubernetes/serial/Stop 1.32
250 TestNoKubernetes/serial/StartNoArgs 8.06
251 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.38
252 TestStoppedBinaryUpgrade/Setup 1.14
254 TestStoppedBinaryUpgrade/MinikubeLogs 0.72
263 TestPause/serial/Start 56.32
264 TestPause/serial/SecondStartNoReconfiguration 28.11
265 TestPause/serial/Pause 0.94
266 TestPause/serial/VerifyStatus 0.42
267 TestPause/serial/Unpause 1.16
268 TestPause/serial/PauseAgain 1.88
269 TestPause/serial/DeletePaused 3.47
270 TestPause/serial/VerifyDeletedResources 12.83
278 TestNetworkPlugins/group/false 6.27
283 TestStartStop/group/old-k8s-version/serial/FirstStart 126.5
284 TestStartStop/group/old-k8s-version/serial/DeployApp 9.52
285 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.14
286 TestStartStop/group/old-k8s-version/serial/Stop 12.05
287 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.31
288 TestStartStop/group/old-k8s-version/serial/SecondStart 449.89
290 TestStartStop/group/no-preload/serial/FirstStart 67.74
291 TestStartStop/group/no-preload/serial/DeployApp 10.33
292 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.17
293 TestStartStop/group/no-preload/serial/Stop 12.04
294 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
295 TestStartStop/group/no-preload/serial/SecondStart 621.54
296 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
297 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.17
298 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.34
299 TestStartStop/group/old-k8s-version/serial/Pause 4.32
301 TestStartStop/group/embed-certs/serial/FirstStart 79.25
302 TestStartStop/group/embed-certs/serial/DeployApp 11.37
303 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.24
304 TestStartStop/group/embed-certs/serial/Stop 12
305 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.24
306 TestStartStop/group/embed-certs/serial/SecondStart 624.49
307 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
308 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
309 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.3
310 TestStartStop/group/no-preload/serial/Pause 3.56
312 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 81.97
313 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.4
314 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.27
315 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.07
316 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
317 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 625.77
318 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
319 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.13
320 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.31
321 TestStartStop/group/embed-certs/serial/Pause 5.21
323 TestStartStop/group/newest-cni/serial/FirstStart 46.08
324 TestStartStop/group/newest-cni/serial/DeployApp 0
325 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.06
326 TestStartStop/group/newest-cni/serial/Stop 1.28
327 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.24
328 TestStartStop/group/newest-cni/serial/SecondStart 31.57
329 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
330 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
331 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
332 TestStartStop/group/newest-cni/serial/Pause 3.36
333 TestNetworkPlugins/group/auto/Start 78.49
334 TestNetworkPlugins/group/auto/KubeletFlags 0.37
335 TestNetworkPlugins/group/auto/NetCatPod 10.32
336 TestNetworkPlugins/group/auto/DNS 0.2
337 TestNetworkPlugins/group/auto/Localhost 0.17
338 TestNetworkPlugins/group/auto/HairPin 0.17
339 TestNetworkPlugins/group/kindnet/Start 83.15
340 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
341 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
342 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.29
343 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.55
344 TestNetworkPlugins/group/calico/Start 81.9
345 TestNetworkPlugins/group/kindnet/ControllerPod 6.02
346 TestNetworkPlugins/group/kindnet/KubeletFlags 0.46
347 TestNetworkPlugins/group/kindnet/NetCatPod 12.29
348 TestNetworkPlugins/group/kindnet/DNS 0.27
349 TestNetworkPlugins/group/kindnet/Localhost 0.2
350 TestNetworkPlugins/group/kindnet/HairPin 0.2
351 TestNetworkPlugins/group/custom-flannel/Start 73.17
352 TestNetworkPlugins/group/calico/ControllerPod 6.01
353 TestNetworkPlugins/group/calico/KubeletFlags 0.35
354 TestNetworkPlugins/group/calico/NetCatPod 11.3
355 TestNetworkPlugins/group/calico/DNS 0.35
356 TestNetworkPlugins/group/calico/Localhost 0.29
357 TestNetworkPlugins/group/calico/HairPin 0.34
358 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.43
359 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.44
360 TestNetworkPlugins/group/enable-default-cni/Start 99.29
361 TestNetworkPlugins/group/custom-flannel/DNS 0.25
362 TestNetworkPlugins/group/custom-flannel/Localhost 0.21
363 TestNetworkPlugins/group/custom-flannel/HairPin 0.18
364 TestNetworkPlugins/group/flannel/Start 67.91
365 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.35
366 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.37
367 TestNetworkPlugins/group/flannel/ControllerPod 6.01
368 TestNetworkPlugins/group/flannel/KubeletFlags 0.34
369 TestNetworkPlugins/group/flannel/NetCatPod 11.3
370 TestNetworkPlugins/group/enable-default-cni/DNS 0.29
371 TestNetworkPlugins/group/enable-default-cni/Localhost 0.22
372 TestNetworkPlugins/group/enable-default-cni/HairPin 0.25
373 TestNetworkPlugins/group/flannel/DNS 0.28
374 TestNetworkPlugins/group/flannel/Localhost 0.26
375 TestNetworkPlugins/group/flannel/HairPin 0.27
376 TestNetworkPlugins/group/bridge/Start 86.41
377 TestNetworkPlugins/group/bridge/KubeletFlags 0.35
378 TestNetworkPlugins/group/bridge/NetCatPod 10.26
379 TestNetworkPlugins/group/bridge/DNS 0.37
380 TestNetworkPlugins/group/bridge/Localhost 0.18
381 TestNetworkPlugins/group/bridge/HairPin 0.23
TestDownloadOnly/v1.16.0/json-events (14.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-988176 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-988176 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (14.090203706s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (14.09s)
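
The json-events subtest drives `start -o=json`, which emits one CloudEvents-style JSON object per line on stdout. A minimal sketch of consuming that stream, assuming jq is installed and that the `io.k8s.sigs.minikube.step` event type and its `data.name` field match minikube's documented JSON output schema:

	# Print the name of each start step as it is emitted.
	out/minikube-linux-arm64 start -o=json --download-only -p download-only-988176 \
	  --force --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.name'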

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-988176
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-988176: exit status 85 (88.69492ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-988176 | jenkins | v1.32.0 | 26 Dec 23 21:44 UTC |          |
	|         | -p download-only-988176        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/26 21:44:39
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1226 21:44:39.464482  703041 out.go:296] Setting OutFile to fd 1 ...
	I1226 21:44:39.464788  703041 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 21:44:39.464826  703041 out.go:309] Setting ErrFile to fd 2...
	I1226 21:44:39.464853  703041 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 21:44:39.465150  703041 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17857-697646/.minikube/bin
	W1226 21:44:39.465351  703041 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17857-697646/.minikube/config/config.json: open /home/jenkins/minikube-integration/17857-697646/.minikube/config/config.json: no such file or directory
	I1226 21:44:39.465929  703041 out.go:303] Setting JSON to true
	I1226 21:44:39.466924  703041 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":19613,"bootTime":1703607466,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1226 21:44:39.467049  703041 start.go:138] virtualization:  
	I1226 21:44:39.470402  703041 out.go:97] [download-only-988176] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1226 21:44:39.472804  703041 out.go:169] MINIKUBE_LOCATION=17857
	W1226 21:44:39.470783  703041 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17857-697646/.minikube/cache/preloaded-tarball: no such file or directory
	I1226 21:44:39.470842  703041 notify.go:220] Checking for updates...
	I1226 21:44:39.476693  703041 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1226 21:44:39.478406  703041 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17857-697646/kubeconfig
	I1226 21:44:39.480276  703041 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17857-697646/.minikube
	I1226 21:44:39.482154  703041 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1226 21:44:39.486139  703041 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1226 21:44:39.486440  703041 driver.go:392] Setting default libvirt URI to qemu:///system
	I1226 21:44:39.511422  703041 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1226 21:44:39.511524  703041 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 21:44:39.590792  703041 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2023-12-26 21:44:39.581094491 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1226 21:44:39.590896  703041 docker.go:295] overlay module found
	I1226 21:44:39.592999  703041 out.go:97] Using the docker driver based on user configuration
	I1226 21:44:39.593028  703041 start.go:298] selected driver: docker
	I1226 21:44:39.593035  703041 start.go:902] validating driver "docker" against <nil>
	I1226 21:44:39.594114  703041 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 21:44:39.662294  703041 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2023-12-26 21:44:39.652916336 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1226 21:44:39.662459  703041 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1226 21:44:39.662749  703041 start_flags.go:394] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1226 21:44:39.662937  703041 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1226 21:44:39.665245  703041 out.go:169] Using Docker driver with root privileges
	I1226 21:44:39.667282  703041 cni.go:84] Creating CNI manager for ""
	I1226 21:44:39.667304  703041 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1226 21:44:39.667319  703041 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1226 21:44:39.667332  703041 start_flags.go:323] config:
	{Name:download-only-988176 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-988176 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 21:44:39.669489  703041 out.go:97] Starting control plane node download-only-988176 in cluster download-only-988176
	I1226 21:44:39.669523  703041 cache.go:121] Beginning downloading kic base image for docker with crio
	I1226 21:44:39.671330  703041 out.go:97] Pulling base image v0.0.42-1703498848-17857 ...
	I1226 21:44:39.671358  703041 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1226 21:44:39.671517  703041 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I1226 21:44:39.692855  703041 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c to local cache
	I1226 21:44:39.693047  703041 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory
	I1226 21:44:39.693165  703041 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c to local cache
	I1226 21:44:39.731154  703041 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I1226 21:44:39.731179  703041 cache.go:56] Caching tarball of preloaded images
	I1226 21:44:39.731394  703041 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1226 21:44:39.733855  703041 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1226 21:44:39.733890  703041 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I1226 21:44:39.845143  703041 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:743cd3b7071469270e4dbdc0d89badaa -> /home/jenkins/minikube-integration/17857-697646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I1226 21:44:48.088829  703041 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c as a tarball
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-988176"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.09s)
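Note: the non-zero "logs" exit above is expected for a --download-only profile: the start run only caches images and binaries and never creates a node, which is why the output ends with the 'The control plane node "" does not exist.' hint. A minimal repro sketch (the profile name here is illustrative, not from this run):

    # cache artifacts for a given Kubernetes version without starting a node
    out/minikube-linux-arm64 start --download-only -p download-demo \
      --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker
    # with no control-plane node, "logs" exits non-zero (85 in this report)
    out/minikube-linux-arm64 logs -p download-demo; echo "exit=$?"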

TestDownloadOnly/v1.28.4/json-events (10.54s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-988176 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-988176 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio: (10.544100219s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (10.54s)

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-988176
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-988176: exit status 85 (93.748793ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-988176 | jenkins | v1.32.0 | 26 Dec 23 21:44 UTC |          |
	|         | -p download-only-988176        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-988176 | jenkins | v1.32.0 | 26 Dec 23 21:44 UTC |          |
	|         | -p download-only-988176        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/26 21:44:53
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1226 21:44:53.643766  703119 out.go:296] Setting OutFile to fd 1 ...
	I1226 21:44:53.643921  703119 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 21:44:53.643930  703119 out.go:309] Setting ErrFile to fd 2...
	I1226 21:44:53.643936  703119 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 21:44:53.644271  703119 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17857-697646/.minikube/bin
	W1226 21:44:53.644392  703119 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17857-697646/.minikube/config/config.json: open /home/jenkins/minikube-integration/17857-697646/.minikube/config/config.json: no such file or directory
	I1226 21:44:53.644661  703119 out.go:303] Setting JSON to true
	I1226 21:44:53.645489  703119 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":19627,"bootTime":1703607466,"procs":144,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1226 21:44:53.645565  703119 start.go:138] virtualization:  
	I1226 21:44:53.647960  703119 out.go:97] [download-only-988176] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1226 21:44:53.648102  703119 notify.go:220] Checking for updates...
	I1226 21:44:53.649729  703119 out.go:169] MINIKUBE_LOCATION=17857
	I1226 21:44:53.651616  703119 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1226 21:44:53.653320  703119 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17857-697646/kubeconfig
	I1226 21:44:53.654909  703119 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17857-697646/.minikube
	I1226 21:44:53.656445  703119 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1226 21:44:53.659885  703119 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1226 21:44:53.660402  703119 config.go:182] Loaded profile config "download-only-988176": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W1226 21:44:53.660451  703119 start.go:810] api.Load failed for download-only-988176: filestore "download-only-988176": Docker machine "download-only-988176" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1226 21:44:53.660574  703119 driver.go:392] Setting default libvirt URI to qemu:///system
	W1226 21:44:53.660602  703119 start.go:810] api.Load failed for download-only-988176: filestore "download-only-988176": Docker machine "download-only-988176" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1226 21:44:53.683640  703119 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1226 21:44:53.683751  703119 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 21:44:53.763902  703119 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-12-26 21:44:53.752636238 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1226 21:44:53.764010  703119 docker.go:295] overlay module found
	I1226 21:44:53.765944  703119 out.go:97] Using the docker driver based on existing profile
	I1226 21:44:53.765985  703119 start.go:298] selected driver: docker
	I1226 21:44:53.765992  703119 start.go:902] validating driver "docker" against &{Name:download-only-988176 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-988176 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 21:44:53.766165  703119 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 21:44:53.833059  703119 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-12-26 21:44:53.82369495 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1226 21:44:53.833520  703119 cni.go:84] Creating CNI manager for ""
	I1226 21:44:53.833539  703119 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1226 21:44:53.833552  703119 start_flags.go:323] config:
	{Name:download-only-988176 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-988176 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 21:44:53.835509  703119 out.go:97] Starting control plane node download-only-988176 in cluster download-only-988176
	I1226 21:44:53.835529  703119 cache.go:121] Beginning downloading kic base image for docker with crio
	I1226 21:44:53.837492  703119 out.go:97] Pulling base image v0.0.42-1703498848-17857 ...
	I1226 21:44:53.837520  703119 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1226 21:44:53.837677  703119 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I1226 21:44:53.854547  703119 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c to local cache
	I1226 21:44:53.854701  703119 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory
	I1226 21:44:53.854726  703119 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory, skipping pull
	I1226 21:44:53.854734  703119 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in cache, skipping pull
	I1226 21:44:53.854742  703119 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c as a tarball
	I1226 21:44:53.904995  703119 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I1226 21:44:53.905025  703119 cache.go:56] Caching tarball of preloaded images
	I1226 21:44:53.905195  703119 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1226 21:44:53.907441  703119 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I1226 21:44:53.907461  703119 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 ...
	I1226 21:44:54.014219  703119 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4?checksum=md5:23e2271fd1a7b32f52ce36ae8363c081 -> /home/jenkins/minikube-integration/17857-697646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I1226 21:45:02.477097  703119 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 ...
	I1226 21:45:02.477212  703119 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17857-697646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 ...
	I1226 21:45:03.384225  703119 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1226 21:45:03.384391  703119 profile.go:148] Saving config to /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/download-only-988176/config.json ...
	I1226 21:45:03.384687  703119 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1226 21:45:03.384976  703119 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/17857-697646/.minikube/cache/linux/arm64/v1.28.4/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-988176"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.09s)
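The preload URLs above carry the expected digest as a query parameter (checksum=md5:...), and the log shows the checksum being saved and then re-verified against the downloaded tarball. Checking the same artifact by hand would look roughly like this (the local filename is illustrative; the md5 value is the one from the log):

    curl -fLo preload-v1.28.4.tar.lz4 \
      "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4"
    # md5sum -c reads "<hash>  <file>" pairs from stdin
    echo "23e2271fd1a7b32f52ce36ae8363c081  preload-v1.28.4.tar.lz4" | md5sum -c -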

TestDownloadOnly/v1.29.0-rc.2/json-events (10.56s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-988176 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-988176 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (10.562778597s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (10.56s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.39s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-988176
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-988176: exit status 85 (392.269278ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-988176 | jenkins | v1.32.0 | 26 Dec 23 21:44 UTC |          |
	|         | -p download-only-988176           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-988176 | jenkins | v1.32.0 | 26 Dec 23 21:44 UTC |          |
	|         | -p download-only-988176           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-988176 | jenkins | v1.32.0 | 26 Dec 23 21:45 UTC |          |
	|         | -p download-only-988176           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2023/12/26 21:45:04
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1226 21:45:04.281858  703194 out.go:296] Setting OutFile to fd 1 ...
	I1226 21:45:04.281995  703194 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 21:45:04.282004  703194 out.go:309] Setting ErrFile to fd 2...
	I1226 21:45:04.282010  703194 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 21:45:04.282270  703194 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17857-697646/.minikube/bin
	W1226 21:45:04.282416  703194 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17857-697646/.minikube/config/config.json: open /home/jenkins/minikube-integration/17857-697646/.minikube/config/config.json: no such file or directory
	I1226 21:45:04.282646  703194 out.go:303] Setting JSON to true
	I1226 21:45:04.283465  703194 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":19638,"bootTime":1703607466,"procs":144,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1226 21:45:04.283541  703194 start.go:138] virtualization:  
	I1226 21:45:04.286063  703194 out.go:97] [download-only-988176] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1226 21:45:04.288554  703194 out.go:169] MINIKUBE_LOCATION=17857
	I1226 21:45:04.286368  703194 notify.go:220] Checking for updates...
	I1226 21:45:04.293110  703194 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1226 21:45:04.295092  703194 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17857-697646/kubeconfig
	I1226 21:45:04.297069  703194 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17857-697646/.minikube
	I1226 21:45:04.299390  703194 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1226 21:45:04.303597  703194 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1226 21:45:04.304127  703194 config.go:182] Loaded profile config "download-only-988176": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	W1226 21:45:04.304176  703194 start.go:810] api.Load failed for download-only-988176: filestore "download-only-988176": Docker machine "download-only-988176" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1226 21:45:04.304300  703194 driver.go:392] Setting default libvirt URI to qemu:///system
	W1226 21:45:04.304329  703194 start.go:810] api.Load failed for download-only-988176: filestore "download-only-988176": Docker machine "download-only-988176" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1226 21:45:04.328682  703194 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1226 21:45:04.328797  703194 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 21:45:04.411304  703194 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-12-26 21:45:04.401590915 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1226 21:45:04.411403  703194 docker.go:295] overlay module found
	I1226 21:45:04.413629  703194 out.go:97] Using the docker driver based on existing profile
	I1226 21:45:04.413655  703194 start.go:298] selected driver: docker
	I1226 21:45:04.413661  703194 start.go:902] validating driver "docker" against &{Name:download-only-988176 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-988176 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 21:45:04.413836  703194 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 21:45:04.482320  703194 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-12-26 21:45:04.471917599 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1226 21:45:04.482843  703194 cni.go:84] Creating CNI manager for ""
	I1226 21:45:04.482863  703194 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1226 21:45:04.482874  703194 start_flags.go:323] config:
	{Name:download-only-988176 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-988176 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 21:45:04.485252  703194 out.go:97] Starting control plane node download-only-988176 in cluster download-only-988176
	I1226 21:45:04.485280  703194 cache.go:121] Beginning downloading kic base image for docker with crio
	I1226 21:45:04.487318  703194 out.go:97] Pulling base image v0.0.42-1703498848-17857 ...
	I1226 21:45:04.487348  703194 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1226 21:45:04.487527  703194 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local docker daemon
	I1226 21:45:04.504608  703194 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c to local cache
	I1226 21:45:04.504834  703194 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory
	I1226 21:45:04.504861  703194 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c in local cache directory, skipping pull
	I1226 21:45:04.504872  703194 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c exists in cache, skipping pull
	I1226 21:45:04.504880  703194 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c as a tarball
	I1226 21:45:04.553130  703194 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4
	I1226 21:45:04.553155  703194 cache.go:56] Caching tarball of preloaded images
	I1226 21:45:04.553318  703194 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1226 21:45:04.555757  703194 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I1226 21:45:04.555784  703194 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4 ...
	I1226 21:45:04.664959  703194 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4?checksum=md5:307124b87428587d9288b24ec2db2592 -> /home/jenkins/minikube-integration/17857-697646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4
	I1226 21:45:10.862651  703194 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4 ...
	I1226 21:45:10.862761  703194 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17857-697646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4 ...
	I1226 21:45:11.726025  703194 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I1226 21:45:11.726160  703194 profile.go:148] Saving config to /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/download-only-988176/config.json ...
	I1226 21:45:11.726383  703194 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I1226 21:45:11.726583  703194 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/17857-697646/.minikube/cache/linux/arm64/v1.29.0-rc.2/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-988176"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.39s)
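The last download in this log fetches kubectl and validates it against the published sha256 (checksum=file:...kubectl.sha256). The equivalent manual check, relying on dl.k8s.io shipping a bare hash in the .sha256 file:

    curl -fLO "https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/arm64/kubectl"
    echo "$(curl -fsSL https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/arm64/kubectl.sha256)  kubectl" | sha256sum -c -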

TestDownloadOnly/DeleteAll (0.36s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.36s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.23s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-988176
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.23s)

TestBinaryMirror (0.63s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-438777 --alsologtostderr --binary-mirror http://127.0.0.1:45525 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-438777" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-438777
--- PASS: TestBinaryMirror (0.63s)
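TestBinaryMirror points --binary-mirror at a local HTTP endpoint and checks that minikube fetches its Kubernetes binaries from there instead of dl.k8s.io. A rough sketch of standing up such a mirror, assuming (not verified here) that minikube appends the usual /v<version>/bin/<os>/<arch>/<binary> path to the mirror URL:

    mkdir -p mirror                                    # populate with the binaries to serve
    python3 -m http.server 45525 --directory mirror &  # port chosen to match the invocation above
    out/minikube-linux-arm64 start --download-only -p mirror-demo \
      --binary-mirror http://127.0.0.1:45525 --driver=docker --container-runtime=crio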

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-154736
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-154736: exit status 85 (91.17734ms)

-- stdout --
	* Profile "addons-154736" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-154736"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-154736
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-154736: exit status 85 (90.736858ms)

-- stdout --
	* Profile "addons-154736" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-154736"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)
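Both PreSetup checks exercise the same guard from opposite directions: enabling or disabling an addon for a profile that does not exist fails fast (exit status 85 here) with a pointer to "minikube profile list" instead of creating any state. For example (the profile name is deliberately bogus):

    out/minikube-linux-arm64 addons enable dashboard -p no-such-profile || echo "refused, exit=$?"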

TestAddons/Setup (174.64s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-154736 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-154736 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (2m54.638535104s)
--- PASS: TestAddons/Setup (174.64s)

TestAddons/parallel/Registry (16.66s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 46.267689ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-g2w98" [21fa161c-0f99-4fb5-9573-259bd78d21a5] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.013592673s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-h7qrg" [274f34a4-99a0-4df2-8e40-73229ad88336] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.005703577s
addons_test.go:340: (dbg) Run:  kubectl --context addons-154736 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-154736 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-154736 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.408206736s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-arm64 -p addons-154736 ip
2023/12/26 21:48:27 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p addons-154736 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.66s)
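The registry check resolves the addon service in-cluster (registry.kube-system.svc.cluster.local) and then hits the registry-proxy on the node IP at port 5000 (the DEBUG GET line above). A hedged sketch of pushing to that endpoint from the host (the image name is illustrative, and docker may need the endpoint listed under insecure-registries):

    IP=$(out/minikube-linux-arm64 -p addons-154736 ip)
    docker tag busybox "$IP:5000/demo:latest"
    docker push "$IP:5000/demo:latest"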

TestAddons/parallel/InspektorGadget (11.52s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-m4wr4" [16d89927-1ecc-4d35-befe-af35ceab1f18] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.006927474s
addons_test.go:841: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-154736
addons_test.go:841: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-154736: (6.508655199s)
--- PASS: TestAddons/parallel/InspektorGadget (11.52s)

TestAddons/parallel/MetricsServer (6.05s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 6.088623ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-pz8ht" [ff2fdb32-af66-480d-ad25-175b65c5b1d4] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004867962s
addons_test.go:415: (dbg) Run:  kubectl --context addons-154736 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-arm64 -p addons-154736 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.05s)

TestAddons/parallel/Headlamp (12.55s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-154736 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-154736 --alsologtostderr -v=1: (1.540797534s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-qntlc" [716aab8f-3f84-4751-ab02-0d1524c3eaea] Pending
helpers_test.go:344: "headlamp-7ddfbb94ff-qntlc" [716aab8f-3f84-4751-ab02-0d1524c3eaea] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-qntlc" [716aab8f-3f84-4751-ab02-0d1524c3eaea] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-qntlc" [716aab8f-3f84-4751-ab02-0d1524c3eaea] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003590808s
--- PASS: TestAddons/parallel/Headlamp (12.55s)

TestAddons/parallel/CloudSpanner (5.72s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-64c8c85f65-gkggw" [b2320a9f-f837-4d2e-9c13-7aecf7b52b83] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.010101731s
addons_test.go:860: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-154736
--- PASS: TestAddons/parallel/CloudSpanner (5.72s)

TestAddons/parallel/LocalPath (9.6s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-154736 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-154736 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-154736 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-154736 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-154736 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-154736 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-154736 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-154736 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [2c5a273d-a096-4b37-b27e-32725fcf690d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [2c5a273d-a096-4b37-b27e-32725fcf690d] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [2c5a273d-a096-4b37-b27e-32725fcf690d] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003916694s
addons_test.go:891: (dbg) Run:  kubectl --context addons-154736 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-arm64 -p addons-154736 ssh "cat /opt/local-path-provisioner/pvc-e94447a0-cc9f-4ee2-b024-1e95c001aae0_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-154736 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-154736 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-arm64 -p addons-154736 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (9.60s)
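
Note: the repeated phase polls above reflect local-path's provision-on-first-use behavior (the claim stays Pending until a consumer pod binds it). A minimal sketch of the same check; the storage class name and size are illustrative assumptions, not taken from testdata/:

	kubectl --context addons-154736 apply -f - <<-EOF
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: test-pvc
	spec:
	  accessModes: ["ReadWriteOnce"]
	  storageClassName: local-path
	  resources:
	    requests:
	      storage: 64Mi
	EOF
	# stays "Pending" until a pod mounts it, then flips to "Bound"
	kubectl --context addons-154736 get pvc test-pvc -o jsonpath={.status.phase} -n default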

TestAddons/parallel/NvidiaDevicePlugin (6.64s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-9xfxt" [74fad637-1854-48ce-b606-8a09c28e7cfe] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004510768s
addons_test.go:955: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-154736
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.64s)

TestAddons/parallel/Yakd (6.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-5ggjq" [393631ce-7c86-4d48-8c5d-18fa9bc6681c] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.006171772s
--- PASS: TestAddons/parallel/Yakd (6.01s)

TestAddons/serial/GCPAuth/Namespaces (0.19s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-154736 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-154736 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

TestAddons/StoppedEnableDisable (12.44s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-154736
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-154736: (12.109061691s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-154736
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-154736
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-154736
--- PASS: TestAddons/StoppedEnableDisable (12.44s)
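
Note: condensed, the property under test is that addon enable/disable is accepted while the cluster is stopped; a minimal sketch of the same sequence:

	out/minikube-linux-arm64 stop -p addons-154736
	out/minikube-linux-arm64 addons enable dashboard -p addons-154736    # accepted with the cluster down
	out/minikube-linux-arm64 addons disable dashboard -p addons-154736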

TestCertOptions (37.42s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-022037 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E1226 22:44:19.676009  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/functional-262391/client.crt: no such file or directory
E1226 22:44:26.122088  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-022037 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (34.538624764s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-022037 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-022037 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-022037 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-022037" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-022037
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-022037: (2.049631326s)
--- PASS: TestCertOptions (37.42s)
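
Note: the ssh step boils down to inspecting the generated apiserver certificate for the requested SANs and port; a minimal sketch (the grep filters are illustrative additions):

	# expect 192.168.15.15 and www.google.com among the Subject Alternative Names
	out/minikube-linux-arm64 -p cert-options-022037 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
	# the kubeconfig server URL should carry the non-default apiserver port
	kubectl --context cert-options-022037 config view | grep 8555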

TestCertExpiration (241.94s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-721140 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-721140 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (36.895704798s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-721140 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-721140 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (21.514381927s)
helpers_test.go:175: Cleaning up "cert-expiration-721140" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-721140
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-721140: (3.533141691s)
--- PASS: TestCertExpiration (241.94s)
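
Note: the two starts account for only ~58s of the 241.94s total; the remainder is consistent with the harness waiting out the 3m certificate lifetime before the second start regenerates certs with a one-year window. A sketch of the same flow, with the wait made explicit as an assumption:

	out/minikube-linux-arm64 start -p cert-expiration-721140 --memory=2048 --cert-expiration=3m --driver=docker --container-runtime=crio
	sleep 180   # assumed wait: let the short-lived certs lapse
	out/minikube-linux-arm64 start -p cert-expiration-721140 --memory=2048 --cert-expiration=8760h --driver=docker --container-runtime=crio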

TestForceSystemdFlag (41.4s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-109501 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-109501 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (37.495105254s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-109501 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-109501" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-109501
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-109501: (3.515894176s)
--- PASS: TestForceSystemdFlag (41.40s)
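
Note: the ssh step reads the CRI-O config minikube generated; with --force-systemd the expected setting (an assumption about the asserted key, not shown in the log) is the systemd cgroup manager:

	out/minikube-linux-arm64 -p force-systemd-flag-109501 ssh "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager
	# expected: cgroup_manager = "systemd"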

TestForceSystemdEnv (44.87s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-078376 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1226 22:43:11.964694  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/client.crt: no such file or directory
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-078376 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (42.193830836s)
helpers_test.go:175: Cleaning up "force-systemd-env-078376" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-078376
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-078376: (2.672852359s)
--- PASS: TestForceSystemdEnv (44.87s)

TestErrorSpam/setup (30.59s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-279825 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-279825 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-279825 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-279825 --driver=docker  --container-runtime=crio: (30.59043513s)
--- PASS: TestErrorSpam/setup (30.59s)

TestErrorSpam/start (0.92s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-279825 --log_dir /tmp/nospam-279825 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-279825 --log_dir /tmp/nospam-279825 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-279825 --log_dir /tmp/nospam-279825 start --dry-run
E1226 21:58:11.964628  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/client.crt: no such file or directory
E1226 21:58:11.971566  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/client.crt: no such file or directory
E1226 21:58:11.981800  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/client.crt: no such file or directory
E1226 21:58:12.002053  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/client.crt: no such file or directory
E1226 21:58:12.042366  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/client.crt: no such file or directory
E1226 21:58:12.122677  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/client.crt: no such file or directory
--- PASS: TestErrorSpam/start (0.92s)

TestErrorSpam/status (1.17s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-279825 --log_dir /tmp/nospam-279825 status
E1226 21:58:12.282920  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/client.crt: no such file or directory
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-279825 --log_dir /tmp/nospam-279825 status
E1226 21:58:12.603178  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/client.crt: no such file or directory
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-279825 --log_dir /tmp/nospam-279825 status
E1226 21:58:13.244168  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/client.crt: no such file or directory
--- PASS: TestErrorSpam/status (1.17s)

TestErrorSpam/pause (1.94s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-279825 --log_dir /tmp/nospam-279825 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-279825 --log_dir /tmp/nospam-279825 pause
E1226 21:58:14.524940  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/client.crt: no such file or directory
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-279825 --log_dir /tmp/nospam-279825 pause
--- PASS: TestErrorSpam/pause (1.94s)

TestErrorSpam/unpause (2.12s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-279825 --log_dir /tmp/nospam-279825 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-279825 --log_dir /tmp/nospam-279825 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-279825 --log_dir /tmp/nospam-279825 unpause
E1226 21:58:17.085149  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/client.crt: no such file or directory
--- PASS: TestErrorSpam/unpause (2.12s)

TestErrorSpam/stop (1.47s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-279825 --log_dir /tmp/nospam-279825 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-279825 --log_dir /tmp/nospam-279825 stop: (1.235870027s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-279825 --log_dir /tmp/nospam-279825 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-279825 --log_dir /tmp/nospam-279825 stop
--- PASS: TestErrorSpam/stop (1.47s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /home/jenkins/minikube-integration/17857-697646/.minikube/files/etc/test/nested/copy/703036/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (75.19s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-linux-arm64 start -p functional-262391 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1226 21:58:32.445590  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/client.crt: no such file or directory
E1226 21:58:52.925913  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/client.crt: no such file or directory
E1226 21:59:33.886146  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/client.crt: no such file or directory
functional_test.go:2233: (dbg) Done: out/minikube-linux-arm64 start -p functional-262391 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m15.187993458s)
--- PASS: TestFunctional/serial/StartWithProxy (75.19s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (41.23s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-262391 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-262391 --alsologtostderr -v=8: (41.218910612s)
functional_test.go:659: soft start took 41.225054947s for "functional-262391" cluster.
--- PASS: TestFunctional/serial/SoftStart (41.23s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-262391 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.72s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-262391 cache add registry.k8s.io/pause:3.1: (1.314470061s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-262391 cache add registry.k8s.io/pause:3.3: (1.26218902s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-262391 cache add registry.k8s.io/pause:latest: (1.143238017s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.72s)

TestFunctional/serial/CacheCmd/cache/add_local (1.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-262391 /tmp/TestFunctionalserialCacheCmdcacheadd_local3427949827/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 cache add minikube-local-cache-test:functional-262391
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 cache delete minikube-local-cache-test:functional-262391
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-262391
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.11s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.37s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.37s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-262391 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (370.195588ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-262391 cache reload: (1.115159604s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.22s)
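
Note: the remove/miss/reload/hit sequence above, condensed into a runnable sketch; `|| true` just tolerates the expected non-zero exit while the image is absent:

	out/minikube-linux-arm64 -p functional-262391 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-arm64 -p functional-262391 ssh sudo crictl inspecti registry.k8s.io/pause:latest || true   # fails: image was removed
	out/minikube-linux-arm64 -p functional-262391 cache reload    # re-push everything in minikube's image cache
	out/minikube-linux-arm64 -p functional-262391 ssh sudo crictl inspecti registry.k8s.io/pause:latest           # succeeds again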

TestFunctional/serial/CacheCmd/cache/delete (0.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.15s)

TestFunctional/serial/MinikubeKubectlCmd (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 kubectl -- --context functional-262391 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-262391 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

TestFunctional/serial/ExtraConfig (38.06s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-262391 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1226 22:00:55.808336  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-262391 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.062950834s)
functional_test.go:757: restart took 38.063076017s for "functional-262391" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (38.06s)
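
Note: --extra-config takes <component>.<flag>=<value> pairs that must survive a restart; a minimal sketch of confirming the flag reached the apiserver (the label selector and grep are illustrative assumptions):

	out/minikube-linux-arm64 start -p functional-262391 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
	kubectl --context functional-262391 -n kube-system get pod -l component=kube-apiserver -o yaml | grep enable-admission-plugins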

TestFunctional/serial/ComponentHealth (0.11s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-262391 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

TestFunctional/serial/LogsCmd (1.93s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-262391 logs: (1.933231606s)
--- PASS: TestFunctional/serial/LogsCmd (1.93s)

TestFunctional/serial/LogsFileCmd (1.95s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 logs --file /tmp/TestFunctionalserialLogsFileCmd3398630386/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-262391 logs --file /tmp/TestFunctionalserialLogsFileCmd3398630386/001/logs.txt: (1.952059662s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.95s)

TestFunctional/serial/InvalidService (4.26s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-262391 apply -f testdata/invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-262391
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-262391: exit status 115 (678.272341ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30434 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-262391 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.26s)
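
Note: exit status 115 is the SVC_UNREACHABLE code shown in the stderr block: the service exists but has no running pod behind it. A minimal reproduction:

	kubectl --context functional-262391 apply -f testdata/invalidsvc.yaml
	out/minikube-linux-arm64 service invalid-svc -p functional-262391; echo "exit=$?"   # expect exit=115
	kubectl --context functional-262391 delete -f testdata/invalidsvc.yaml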

TestFunctional/parallel/ConfigCmd (0.68s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-262391 config get cpus: exit status 14 (113.575462ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-262391 config get cpus: exit status 14 (105.665284ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.68s)
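
Note: both non-zero exits above assert that `config get` on an unset key fails with status 14 instead of printing an empty value; the full round trip:

	out/minikube-linux-arm64 -p functional-262391 config get cpus; echo "exit=$?"   # unset key: expect exit=14
	out/minikube-linux-arm64 -p functional-262391 config set cpus 2
	out/minikube-linux-arm64 -p functional-262391 config get cpus                   # prints 2
	out/minikube-linux-arm64 -p functional-262391 config unset cpus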

TestFunctional/parallel/DashboardCmd (41.18s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-262391 --alsologtostderr -v=1]
2023/12/26 22:06:29 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-262391 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 728577: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (41.18s)

TestFunctional/parallel/DryRun (0.54s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-262391 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-262391 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (231.224549ms)

-- stdout --
	* [functional-262391] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17857
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17857-697646/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17857-697646/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1226 22:05:48.490890  728352 out.go:296] Setting OutFile to fd 1 ...
	I1226 22:05:48.491023  728352 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 22:05:48.491033  728352 out.go:309] Setting ErrFile to fd 2...
	I1226 22:05:48.491039  728352 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 22:05:48.491299  728352 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17857-697646/.minikube/bin
	I1226 22:05:48.491678  728352 out.go:303] Setting JSON to false
	I1226 22:05:48.492685  728352 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":20882,"bootTime":1703607466,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1226 22:05:48.492757  728352 start.go:138] virtualization:  
	I1226 22:05:48.495446  728352 out.go:177] * [functional-262391] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1226 22:05:48.498224  728352 out.go:177]   - MINIKUBE_LOCATION=17857
	I1226 22:05:48.498371  728352 notify.go:220] Checking for updates...
	I1226 22:05:48.502860  728352 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1226 22:05:48.505213  728352 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17857-697646/kubeconfig
	I1226 22:05:48.507477  728352 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17857-697646/.minikube
	I1226 22:05:48.509433  728352 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1226 22:05:48.511610  728352 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1226 22:05:48.514252  728352 config.go:182] Loaded profile config "functional-262391": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1226 22:05:48.514892  728352 driver.go:392] Setting default libvirt URI to qemu:///system
	I1226 22:05:48.540961  728352 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1226 22:05:48.541072  728352 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 22:05:48.634166  728352 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-12-26 22:05:48.622670206 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1226 22:05:48.634274  728352 docker.go:295] overlay module found
	I1226 22:05:48.636639  728352 out.go:177] * Using the docker driver based on existing profile
	I1226 22:05:48.638696  728352 start.go:298] selected driver: docker
	I1226 22:05:48.638719  728352 start.go:902] validating driver "docker" against &{Name:functional-262391 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-262391 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 22:05:48.638854  728352 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1226 22:05:48.641575  728352 out.go:177] 
	W1226 22:05:48.643628  728352 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1226 22:05:48.645913  728352 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-262391 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.54s)
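
Note: --dry-run still runs flag validation, so the impossible memory request fails fast with exit 23 (RSRC_INSUFFICIENT_REQ_MEMORY, per the stderr above) while the sane dry run passes:

	out/minikube-linux-arm64 start -p functional-262391 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=crio; echo "exit=$?"   # expect exit=23
	out/minikube-linux-arm64 start -p functional-262391 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=crio                            # exits 0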

TestFunctional/parallel/InternationalLanguage (0.23s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-262391 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-262391 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (230.224432ms)

-- stdout --
	* [functional-262391] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17857
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17857-697646/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17857-697646/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1226 22:05:48.269730  728312 out.go:296] Setting OutFile to fd 1 ...
	I1226 22:05:48.269944  728312 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 22:05:48.269968  728312 out.go:309] Setting ErrFile to fd 2...
	I1226 22:05:48.269989  728312 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 22:05:48.270943  728312 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17857-697646/.minikube/bin
	I1226 22:05:48.271437  728312 out.go:303] Setting JSON to false
	I1226 22:05:48.272357  728312 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":20882,"bootTime":1703607466,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1226 22:05:48.272455  728312 start.go:138] virtualization:  
	I1226 22:05:48.275971  728312 out.go:177] * [functional-262391] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	I1226 22:05:48.278565  728312 out.go:177]   - MINIKUBE_LOCATION=17857
	I1226 22:05:48.280647  728312 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1226 22:05:48.278730  728312 notify.go:220] Checking for updates...
	I1226 22:05:48.282516  728312 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17857-697646/kubeconfig
	I1226 22:05:48.284471  728312 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17857-697646/.minikube
	I1226 22:05:48.286259  728312 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1226 22:05:48.287890  728312 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1226 22:05:48.290619  728312 config.go:182] Loaded profile config "functional-262391": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1226 22:05:48.291299  728312 driver.go:392] Setting default libvirt URI to qemu:///system
	I1226 22:05:48.316369  728312 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1226 22:05:48.316479  728312 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 22:05:48.402896  728312 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-12-26 22:05:48.392955087 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1226 22:05:48.403006  728312 docker.go:295] overlay module found
	I1226 22:05:48.406545  728312 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1226 22:05:48.408708  728312 start.go:298] selected driver: docker
	I1226 22:05:48.408728  728312 start.go:902] validating driver "docker" against &{Name:functional-262391 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-262391 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1226 22:05:48.408861  728312 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1226 22:05:48.411268  728312 out.go:177] 
	W1226 22:05:48.413297  728312 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo [Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250 MiB is below the usable minimum of 1800 MB]
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo [Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250 MiB is below the usable minimum of 1800 MB]
	I1226 22:05:48.415411  728312 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.23s)
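
Note: the localized output above is the point of this test: it forces a French locale, requests a deliberately undersized memory allocation, and expects minikube to refuse in French. A minimal sketch of the same check in Go, assuming this run's out/minikube-linux-arm64 binary; the exact flags are an assumption modeled on the logged failure:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// LC_ALL=fr asks minikube to localize its user-facing output.
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "functional-262391",
		"--dry-run", "--memory", "250MB", "--alsologtostderr")
	cmd.Env = append(os.Environ(), "LC_ALL=fr")
	out, _ := cmd.CombinedOutput() // a non-zero exit is the expected outcome here
	if strings.Contains(string(out), "RSRC_INSUFFICIENT_REQ_MEMORY") {
		fmt.Println("got the expected localized insufficient-memory error")
	}
}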

TestFunctional/parallel/StatusCmd (1.21s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.21s)
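
Note: the run above exercises three output modes of `minikube status`: the default table, a custom Go template via -f (the `kublet` key there is just the label the template prints, not a field name), and JSON via -o json. A hedged sketch of consuming the JSON form, assuming a single-node profile whose JSON is one object with the Host/Kubelet/APIServer/Kubeconfig fields the template implies:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// status mirrors the fields the -f template above selects.
type status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-262391",
		"status", "-o", "json").Output()
	if err != nil {
		panic(err) // status also exits non-zero when components are down
	}
	var s status
	if err := json.Unmarshal(out, &s); err != nil {
		panic(err)
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
		s.Host, s.Kubelet, s.APIServer, s.Kubeconfig)
}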

TestFunctional/parallel/ServiceCmdConnect (46.69s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-262391 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-262391 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-nnmwl" [94a306c6-14b7-4f5a-a2e1-3aa010134d4e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-nnmwl" [94a306c6-14b7-4f5a-a2e1-3aa010134d4e] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 46.00445429s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:31053
functional_test.go:1674: http://192.168.49.2:31053: success! body:
Hostname: hello-node-connect-7799dfb7c6-nnmwl

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31053
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (46.69s)
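
Note: the test deploys an echoserver image, exposes it as a NodePort service, and probes the URL that `minikube service ... --url` prints; the ~46 s wait is the image pull. A client that polls instead of failing on the first refused connection looks roughly like this (the URL is the endpoint reported above):

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	url := "http://192.168.49.2:31053" // endpoint printed by `minikube service --url`
	for i := 0; i < 30; i++ {
		resp, err := http.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("status=%d, %d bytes\n", resp.StatusCode, len(body))
			return
		}
		time.Sleep(2 * time.Second) // pod may still be pulling its image
	}
	fmt.Println("endpoint never became reachable")
}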

TestFunctional/parallel/AddonsCmd (0.19s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

TestFunctional/parallel/SSHCmd (0.85s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.85s)

TestFunctional/parallel/CpCmd (2.69s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 ssh -n functional-262391 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 cp functional-262391:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd963562061/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 ssh -n functional-262391 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 ssh -n functional-262391 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.69s)
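
Note: the three cp invocations above copy a local file into the node, back out to the host, and into a node directory that does not exist yet. A small sketch of the first round trip, assuming the same binary and profile; `ssh -n` matches the form the helpers use:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	local, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		panic(err)
	}
	run := func(args ...string) []byte {
		out, err := exec.Command("out/minikube-linux-arm64",
			append([]string{"-p", "functional-262391"}, args...)...).Output()
		if err != nil {
			panic(err)
		}
		return out
	}
	run("cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt")
	remote := run("ssh", "-n", "functional-262391", "sudo cat /home/docker/cp-test.txt")
	// trim in case the ssh transport appends a trailing newline
	fmt.Println("round trip intact:",
		bytes.Equal(bytes.TrimSpace(local), bytes.TrimSpace(remote)))
}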

TestFunctional/parallel/FileSync (0.31s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/703036/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 ssh "sudo cat /etc/test/nested/copy/703036/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.31s)
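
Note: FileSync verifies that files placed under the host's $MINIKUBE_HOME/.minikube/files tree are mirrored into the node at the corresponding absolute path (here /etc/test/nested/copy/703036/hosts, where 703036 is derived from the test process ID). A sketch of the verification step only, reusing the marker text from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-262391",
		"ssh", "sudo cat /etc/test/nested/copy/703036/hosts").Output()
	if err != nil {
		panic(err)
	}
	want := "Test file for checking file sync process"
	fmt.Println("synced content present:", strings.Contains(string(out), want))
}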

TestFunctional/parallel/CertSync (2.31s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/703036.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 ssh "sudo cat /etc/ssl/certs/703036.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/703036.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 ssh "sudo cat /usr/share/ca-certificates/703036.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/7030362.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 ssh "sudo cat /etc/ssl/certs/7030362.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/7030362.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 ssh "sudo cat /usr/share/ca-certificates/7030362.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.31s)
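
Note: CertSync checks each test certificate in two PEM locations plus a hash-named file such as /etc/ssl/certs/51391683.0; that name follows OpenSSL's subject-hash convention for CA directories. A sketch of reproducing the expected hash name, assuming a local openssl binary and a local copy of the cert at a hypothetical path:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// `openssl x509 -subject_hash` prints the 8-hex-digit hash that
	// c_rehash-style CA directories use as the file name, suffixed ".0".
	out, err := exec.Command("openssl", "x509", "-subject_hash", "-noout",
		"-in", "./703036.pem").Output() // hypothetical local copy of the test cert
	if err != nil {
		panic(err)
	}
	fmt.Printf("expected bundle file name: %s.0\n", strings.TrimSpace(string(out)))
}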

TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-262391 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.63s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 ssh "sudo systemctl is-active docker"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-262391 ssh "sudo systemctl is-active docker": exit status 1 (300.030139ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2026: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 ssh "sudo systemctl is-active containerd"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-262391 ssh "sudo systemctl is-active containerd": exit status 1 (325.157226ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.63s)
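
Note: with crio as the selected runtime, docker and containerd must be inactive inside the node; `systemctl is-active` exits 0 only for an active unit (the "status 3" above is systemd's exit code for an inactive one), so the non-zero exits are the passing outcome. A sketch of the same probe, meant to run on the node itself (for example via `minikube ssh`):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	for _, unit := range []string{"docker", "containerd", "crio"} {
		// Run() returns nil only when is-active exits 0, i.e. the unit is active.
		err := exec.Command("systemctl", "is-active", "--quiet", unit).Run()
		fmt.Printf("%s active: %v\n", unit, err == nil)
	}
}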

TestFunctional/parallel/License (0.29s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.29s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.76s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-262391 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-262391 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-262391 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-262391 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 724946: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.76s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-262391 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.23s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-262391 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-262391 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-v9j24" [36ebaa4c-9634-4934-bffe-f670c826d979] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-v9j24" [36ebaa4c-9634-4934-bffe-f670c826d979] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.004247188s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.23s)

TestFunctional/parallel/ServiceCmd/List (0.56s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.56s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.57s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 service list -o json
functional_test.go:1493: Took "572.146157ms" to run "out/minikube-linux-arm64 -p functional-262391 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.57s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:32258
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.43s)

TestFunctional/parallel/ServiceCmd/Format (0.45s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.45s)

TestFunctional/parallel/ServiceCmd/URL (0.45s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:32258
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.45s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1314: Took "385.128675ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1328: Took "73.133672ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1365: Took "380.159271ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1378: Took "74.951903ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)
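
Note: the --light variant above skips per-profile health checks, which is why it returns in ~75 ms versus ~380 ms for the full listing. A hedged sketch of consuming the JSON; it assumes (this is not shown in the log) a top-level document with "valid" and "invalid" profile arrays whose entries carry a Name:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type profileList struct {
	Valid   []struct{ Name string } `json:"valid"`
	Invalid []struct{ Name string } `json:"invalid"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64",
		"profile", "list", "-o", "json", "--light").Output()
	if err != nil {
		panic(err)
	}
	var p profileList
	if err := json.Unmarshal(out, &p); err != nil {
		panic(err)
	}
	for _, v := range p.Valid {
		fmt.Println("valid profile:", v.Name)
	}
}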

TestFunctional/parallel/MountCmd/any-port (17.54s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-262391 /tmp/TestFunctionalparallelMountCmdany-port2090036595/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1703628325220906594" to /tmp/TestFunctionalparallelMountCmdany-port2090036595/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1703628325220906594" to /tmp/TestFunctionalparallelMountCmdany-port2090036595/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1703628325220906594" to /tmp/TestFunctionalparallelMountCmdany-port2090036595/001/test-1703628325220906594
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-262391 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (424.373378ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 26 22:05 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 26 22:05 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 26 22:05 test-1703628325220906594
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 ssh cat /mount-9p/test-1703628325220906594
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-262391 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [fcf172be-5072-4ef0-87f2-d807a4e97eef] Pending
helpers_test.go:344: "busybox-mount" [fcf172be-5072-4ef0-87f2-d807a4e97eef] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [fcf172be-5072-4ef0-87f2-d807a4e97eef] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [fcf172be-5072-4ef0-87f2-d807a4e97eef] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 14.003933973s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-262391 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-262391 /tmp/TestFunctionalparallelMountCmdany-port2090036595/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (17.54s)
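
Note: the first findmnt probe fails because the 9p mount is still coming up; the test simply retries and then exercises the mount from both host and pod sides. The same "is /mount-9p really a 9p filesystem?" check can be done by scanning /proc/mounts, the kernel's view of the mount table; a sketch meant to run inside the node:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/mounts")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// each line: device mountpoint fstype options dump pass
		fields := strings.Fields(sc.Text())
		if len(fields) >= 3 && fields[1] == "/mount-9p" && fields[2] == "9p" {
			fmt.Println("found a 9p filesystem mounted at /mount-9p")
			return
		}
	}
	fmt.Println("/mount-9p is not mounted (yet)")
}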

TestFunctional/parallel/MountCmd/specific-port (2.05s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-262391 /tmp/TestFunctionalparallelMountCmdspecific-port2869353939/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-262391 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (395.72985ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-262391 /tmp/TestFunctionalparallelMountCmdspecific-port2869353939/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-262391 ssh "sudo umount -f /mount-9p": exit status 1 (338.600438ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-262391 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-262391 /tmp/TestFunctionalparallelMountCmdspecific-port2869353939/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.05s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.18s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-262391 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4097596897/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-262391 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4097596897/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-262391 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4097596897/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-262391 ssh "findmnt -T" /mount1: exit status 1 (774.91567ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-262391 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-262391 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4097596897/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-262391 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4097596897/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-262391 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4097596897/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.18s)

TestFunctional/parallel/Version/short (0.1s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

TestFunctional/parallel/Version/components (1.37s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 version -o=json --components
functional_test.go:2269: (dbg) Done: out/minikube-linux-arm64 -p functional-262391 version -o=json --components: (1.371096543s)
--- PASS: TestFunctional/parallel/Version/components (1.37s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-262391 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-262391
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-262391 image ls --format short --alsologtostderr:
I1226 22:06:53.313930  730019 out.go:296] Setting OutFile to fd 1 ...
I1226 22:06:53.314134  730019 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1226 22:06:53.314156  730019 out.go:309] Setting ErrFile to fd 2...
I1226 22:06:53.314177  730019 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1226 22:06:53.314471  730019 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17857-697646/.minikube/bin
I1226 22:06:53.315140  730019 config.go:182] Loaded profile config "functional-262391": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1226 22:06:53.315292  730019 config.go:182] Loaded profile config "functional-262391": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1226 22:06:53.317116  730019 cli_runner.go:164] Run: docker container inspect functional-262391 --format={{.State.Status}}
I1226 22:06:53.335573  730019 ssh_runner.go:195] Run: systemctl --version
I1226 22:06:53.335647  730019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-262391
I1226 22:06:53.356076  730019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33681 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/functional-262391/id_rsa Username:docker}
I1226 22:06:53.470690  730019 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-262391 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/kube-scheduler          | v1.28.4            | 05c284c929889 | 59.3MB |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | 04b4eaa3d3db8 | 60.9MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/etcd                    | 3.5.9-0            | 9cdd6470f48c8 | 182MB  |
| registry.k8s.io/pause                   | 3.9                | 829e9de338bd5 | 520kB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| gcr.io/google-containers/addon-resizer  | functional-262391  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/kube-controller-manager | v1.28.4            | 9961cbceaf234 | 117MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | 97e04611ad434 | 51.4MB |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 04b4c447bb9d4 | 121MB  |
| registry.k8s.io/kube-proxy              | v1.28.4            | 3ca3ca488cf13 | 70MB   |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-262391 image ls --format table --alsologtostderr:
I1226 22:06:54.184461  730176 out.go:296] Setting OutFile to fd 1 ...
I1226 22:06:54.188611  730176 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1226 22:06:54.188847  730176 out.go:309] Setting ErrFile to fd 2...
I1226 22:06:54.188877  730176 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1226 22:06:54.189299  730176 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17857-697646/.minikube/bin
I1226 22:06:54.190166  730176 config.go:182] Loaded profile config "functional-262391": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1226 22:06:54.190363  730176 config.go:182] Loaded profile config "functional-262391": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1226 22:06:54.191083  730176 cli_runner.go:164] Run: docker container inspect functional-262391 --format={{.State.Status}}
I1226 22:06:54.215515  730176 ssh_runner.go:195] Run: systemctl --version
I1226 22:06:54.215655  730176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-262391
I1226 22:06:54.242921  730176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33681 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/functional-262391/id_rsa Username:docker}
I1226 22:06:54.351696  730176 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.35s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-262391 image ls --format json --alsologtostderr:
[{"id":"05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"59253556"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb","registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"121119694"},{"id":"3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","repoDigests":["registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"69992343"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"520014"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"60867618"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51393451"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3","registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"182203183"},{"id":"9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"117252916"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-262391"],"size":"34114467"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-262391 image ls --format json --alsologtostderr:
I1226 22:06:53.845863  730115 out.go:296] Setting OutFile to fd 1 ...
I1226 22:06:53.846076  730115 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1226 22:06:53.846085  730115 out.go:309] Setting ErrFile to fd 2...
I1226 22:06:53.846092  730115 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1226 22:06:53.846350  730115 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17857-697646/.minikube/bin
I1226 22:06:53.847020  730115 config.go:182] Loaded profile config "functional-262391": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1226 22:06:53.847166  730115 config.go:182] Loaded profile config "functional-262391": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1226 22:06:53.847760  730115 cli_runner.go:164] Run: docker container inspect functional-262391 --format={{.State.Status}}
I1226 22:06:53.869211  730115 ssh_runner.go:195] Run: systemctl --version
I1226 22:06:53.869269  730115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-262391
I1226 22:06:53.888550  730115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33681 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/functional-262391/id_rsa Username:docker}
I1226 22:06:53.987202  730115 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.34s)
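
Note: the JSON above is an array of image records; id, repoDigests, repoTags and size (a string, in bytes) are the fields visible in the output. A small consumer sketch using the same binary and profile:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors the fields visible in the JSON output above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-262391",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var imgs []image
	if err := json.Unmarshal(out, &imgs); err != nil {
		panic(err)
	}
	for _, img := range imgs {
		fmt.Printf("%v  %s bytes\n", img.RepoTags, img.Size)
	}
}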

TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-262391 image ls --format yaml --alsologtostderr:
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
- registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "182203183"
- id: 05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "59253556"
- id: 04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "60867618"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51393451"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "520014"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-262391
size: "34114467"
- id: 3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39
repoDigests:
- registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "69992343"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
- registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "121119694"
- id: 9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "117252916"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-262391 image ls --format yaml --alsologtostderr:
I1226 22:06:53.465178  730052 out.go:296] Setting OutFile to fd 1 ...
I1226 22:06:53.465303  730052 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1226 22:06:53.465339  730052 out.go:309] Setting ErrFile to fd 2...
I1226 22:06:53.465348  730052 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1226 22:06:53.465739  730052 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17857-697646/.minikube/bin
I1226 22:06:53.466422  730052 config.go:182] Loaded profile config "functional-262391": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1226 22:06:53.466557  730052 config.go:182] Loaded profile config "functional-262391": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1226 22:06:53.467649  730052 cli_runner.go:164] Run: docker container inspect functional-262391 --format={{.State.Status}}
I1226 22:06:53.491868  730052 ssh_runner.go:195] Run: systemctl --version
I1226 22:06:53.491922  730052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-262391
I1226 22:06:53.514423  730052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33681 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/functional-262391/id_rsa Username:docker}
I1226 22:06:53.642600  730052 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-262391 ssh pgrep buildkitd: exit status 1 (399.427659ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 image build -t localhost/my-image:functional-262391 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-262391 image build -t localhost/my-image:functional-262391 testdata/build --alsologtostderr: (2.252264915s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-arm64 -p functional-262391 image build -t localhost/my-image:functional-262391 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 81e3c50bc49
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-262391
--> 8504754b0ff
Successfully tagged localhost/my-image:functional-262391
8504754b0ffa4bc998f1a75ac53ff8f90bd0f6a2796ecc4a508167facc0a9319
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-262391 image build -t localhost/my-image:functional-262391 testdata/build --alsologtostderr:
I1226 22:06:54.026703  730155 out.go:296] Setting OutFile to fd 1 ...
I1226 22:06:54.027310  730155 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1226 22:06:54.027329  730155 out.go:309] Setting ErrFile to fd 2...
I1226 22:06:54.027337  730155 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1226 22:06:54.027641  730155 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17857-697646/.minikube/bin
I1226 22:06:54.028409  730155 config.go:182] Loaded profile config "functional-262391": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1226 22:06:54.030698  730155 config.go:182] Loaded profile config "functional-262391": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1226 22:06:54.031397  730155 cli_runner.go:164] Run: docker container inspect functional-262391 --format={{.State.Status}}
I1226 22:06:54.066794  730155 ssh_runner.go:195] Run: systemctl --version
I1226 22:06:54.066858  730155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-262391
I1226 22:06:54.096013  730155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33681 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/functional-262391/id_rsa Username:docker}
I1226 22:06:54.198948  730155 build_images.go:151] Building image from path: /tmp/build.1016422787.tar
I1226 22:06:54.199017  730155 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1226 22:06:54.215092  730155 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1016422787.tar
I1226 22:06:54.220891  730155 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1016422787.tar: stat -c "%s %y" /var/lib/minikube/build/build.1016422787.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1016422787.tar': No such file or directory
I1226 22:06:54.220924  730155 ssh_runner.go:362] scp /tmp/build.1016422787.tar --> /var/lib/minikube/build/build.1016422787.tar (3072 bytes)
I1226 22:06:54.255717  730155 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1016422787
I1226 22:06:54.268002  730155 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1016422787 -xf /var/lib/minikube/build/build.1016422787.tar
I1226 22:06:54.291793  730155 crio.go:297] Building image: /var/lib/minikube/build/build.1016422787
I1226 22:06:54.291865  730155 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-262391 /var/lib/minikube/build/build.1016422787 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1226 22:06:56.153654  730155 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-262391 /var/lib/minikube/build/build.1016422787 --cgroup-manager=cgroupfs: (1.861755977s)
I1226 22:06:56.153730  730155 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1016422787
I1226 22:06:56.165016  730155 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1016422787.tar
I1226 22:06:56.175764  730155 build_images.go:207] Built localhost/my-image:functional-262391 from /tmp/build.1016422787.tar
I1226 22:06:56.175793  730155 build_images.go:123] succeeded building to: functional-262391
I1226 22:06:56.175806  730155 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.93s)
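
The STEP lines in the stdout above imply a three-line Dockerfile (FROM gcr.io/k8s-minikube/busybox; RUN true; ADD content.txt /). The actual contents of testdata/build are not shown in this log, so the sketch below is an assumed reconstruction: it writes an equivalent build context to a temporary directory and feeds it to image build.

// Hypothetical reconstruction of a build context like testdata/build,
// based only on the STEP lines printed above.
package main

import (
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	dir, err := os.MkdirTemp("", "build")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)

	dockerfile := "FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n"
	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
		panic(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("hello\n"), 0o644); err != nil {
		panic(err)
	}

	// Same invocation as the test, pointed at the reconstructed context.
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "functional-262391",
		"image", "build", "-t", "localhost/my-image:functional-262391", dir)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}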

TestFunctional/parallel/ImageCommands/Setup (2.57s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.543937875s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-262391
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.57s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 image load --daemon gcr.io/google-containers/addon-resizer:functional-262391 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-262391 image load --daemon gcr.io/google-containers/addon-resizer:functional-262391 --alsologtostderr: (4.300688183s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.56s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.95s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 image load --daemon gcr.io/google-containers/addon-resizer:functional-262391 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-262391 image load --daemon gcr.io/google-containers/addon-resizer:functional-262391 --alsologtostderr: (2.692900476s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.95s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.69s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.7409254s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-262391
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 image load --daemon gcr.io/google-containers/addon-resizer:functional-262391 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-262391 image load --daemon gcr.io/google-containers/addon-resizer:functional-262391 --alsologtostderr: (3.677917811s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.69s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.99s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 image save gcr.io/google-containers/addon-resizer:functional-262391 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.99s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 image rm gcr.io/google-containers/addon-resizer:functional-262391 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.60s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p functional-262391 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr: (1.005322388s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.28s)
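
Taken together, ImageSaveToFile, ImageRemove and ImageLoadFromFile above exercise a save/remove/load round trip. A condensed sketch of that sequence, with the image name copied from this run and the tar path simplified (the workspace path would differ locally):

// Sketch of the save/remove/load round trip; helper panics on any failure.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func mk(args ...string) string {
	all := append([]string{"-p", "functional-262391"}, args...)
	out, err := exec.Command("out/minikube-linux-arm64", all...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%v: %s", err, out))
	}
	return string(out)
}

func main() {
	img := "gcr.io/google-containers/addon-resizer:functional-262391"
	tar := "/tmp/addon-resizer-save.tar"

	mk("image", "save", img, tar) // export from the cluster's runtime
	mk("image", "rm", img)        // remove it from the runtime
	mk("image", "load", tar)      // re-import from the tarball
	if !strings.Contains(mk("image", "ls"), "addon-resizer") {
		panic("image missing after reload")
	}
	fmt.Println("round trip ok")
}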

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.97s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-262391
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 image save --daemon gcr.io/google-containers/addon-resizer:functional-262391 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-262391
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.97s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-linux-arm64 -p functional-262391 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-262391 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/delete_addon-resizer_images (0.09s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-262391
--- PASS: TestFunctional/delete_addon-resizer_images (0.09s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-262391
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-262391
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (82.45s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-324559 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1226 22:08:11.961510  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-324559 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m22.452725887s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (82.45s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.66s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-324559 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.66s)

TestJSONOutput/start/Command (76.85s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-376919 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E1226 22:16:16.631672  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/functional-262391/client.crt: no such file or directory
E1226 22:16:44.315207  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/functional-262391/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-376919 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m16.845666054s)
--- PASS: TestJSONOutput/start/Command (76.85s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.84s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-376919 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.84s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.76s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-376919 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.76s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.9s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-376919 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-376919 --output=json --user=testUser: (5.903260398s)
--- PASS: TestJSONOutput/stop/Command (5.90s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.27s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-181253 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-181253 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (104.615359ms)
-- stdout --
	{"specversion":"1.0","id":"82ed615b-174b-4422-806f-a744dfa23d0a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-181253] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2db03798-06af-4d1b-9124-90ada4edc059","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17857"}}
	{"specversion":"1.0","id":"55317918-bd32-4e26-b1cd-08e8124ce52c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"589c365b-98fb-4797-af3c-709d8778764a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17857-697646/kubeconfig"}}
	{"specversion":"1.0","id":"12c08889-ab0d-43c2-bdf5-ae9b301dfca5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17857-697646/.minikube"}}
	{"specversion":"1.0","id":"ec616064-0f2b-404a-a612-1d9949a02a52","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"f73303b5-f1dd-4171-a697-09607e581920","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"23961bf9-4d35-4de4-b649-2bc19b23de3a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-181253" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-181253
--- PASS: TestErrorJSONOutput (0.27s)
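
Every stdout line above is a CloudEvents-style JSON object; the field names in the sketch below are copied directly from that dump. A small Go filter that scans such a stream (for example, a minikube command run with --output=json piped to stdin) and surfaces error events:

// Decode minikube's --output=json event stream, one JSON object per line.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type minikubeEvent struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate non-JSON lines mixed into the stream
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("exit %s (%s): %s\n", ev.Data["exitcode"], ev.Data["name"], ev.Data["message"])
		}
	}
}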

TestKicCustomNetwork/create_custom_network (46.88s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-382681 --network=
E1226 22:18:11.962121  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-382681 --network=: (44.780736182s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-382681" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-382681
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-382681: (2.075691108s)
--- PASS: TestKicCustomNetwork/create_custom_network (46.88s)

TestKicCustomNetwork/use_default_bridge_network (34.32s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-939728 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-939728 --network=bridge: (32.322393408s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-939728" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-939728
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-939728: (1.97068854s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (34.32s)

TestKicExistingNetwork (36.1s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-310422 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-310422 --network=existing-network: (33.881582304s)
helpers_test.go:175: Cleaning up "existing-network-310422" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-310422
E1226 22:19:26.121563  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/client.crt: no such file or directory
E1226 22:19:26.126839  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/client.crt: no such file or directory
E1226 22:19:26.137099  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/client.crt: no such file or directory
E1226 22:19:26.157386  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/client.crt: no such file or directory
E1226 22:19:26.197682  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/client.crt: no such file or directory
E1226 22:19:26.278105  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/client.crt: no such file or directory
E1226 22:19:26.438586  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/client.crt: no such file or directory
E1226 22:19:26.759287  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/client.crt: no such file or directory
E1226 22:19:27.400221  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-310422: (2.048799498s)
--- PASS: TestKicExistingNetwork (36.10s)
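
TestKicExistingNetwork starts minikube against a Docker network that already exists, but the log does not show how the test pre-creates that network; the docker network create step below is therefore an assumption about the setup, not a transcript of it.

// Assumed setup/teardown around the start/delete calls shown above.
package main

import (
	"os"
	"os/exec"
)

func run(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}

func main() {
	run("docker", "network", "create", "existing-network") // assumed pre-step
	defer run("docker", "network", "rm", "existing-network")

	run("out/minikube-linux-arm64", "start", "-p", "existing-network-310422",
		"--network=existing-network")
	run("out/minikube-linux-arm64", "delete", "-p", "existing-network-310422")
}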

TestKicCustomSubnet (35.49s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-456164 --subnet=192.168.60.0/24
E1226 22:19:28.681377  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/client.crt: no such file or directory
E1226 22:19:31.241593  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/client.crt: no such file or directory
E1226 22:19:36.362764  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/client.crt: no such file or directory
E1226 22:19:46.603686  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-456164 --subnet=192.168.60.0/24: (33.298820384s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-456164 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-456164" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-456164
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-456164: (2.170096689s)
--- PASS: TestKicCustomSubnet (35.49s)
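
The subnet check implied above can be reproduced with the same docker network inspect format string the test uses; the network name and expected CIDR are copied from this run.

// Verify the subnet that the kic network was created with.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "network", "inspect", "custom-subnet-456164",
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		panic(err)
	}
	got := strings.TrimSpace(string(out))
	if got != "192.168.60.0/24" {
		panic(fmt.Sprintf("unexpected subnet: %q", got))
	}
	fmt.Println("subnet ok:", got)
}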

TestKicStaticIP (35.79s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-935595 --static-ip=192.168.200.200
E1226 22:20:07.083929  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-935595 --static-ip=192.168.200.200: (33.532883592s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-935595 ip
helpers_test.go:175: Cleaning up "static-ip-935595" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-935595
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-935595: (2.080243828s)
--- PASS: TestKicStaticIP (35.79s)
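
A sketch of the assertion behind the minikube ip call above: after start --static-ip=..., the reported address should equal the requested one (profile name and address copied from this run).

// Compare `minikube ip` output against the requested static address.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "static-ip-935595", "ip").Output()
	if err != nil {
		panic(err)
	}
	if got := strings.TrimSpace(string(out)); got != "192.168.200.200" {
		panic(fmt.Sprintf("expected 192.168.200.200, got %q", got))
	}
	fmt.Println("static IP confirmed")
}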

TestMainNoArgs (0.08s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.08s)

TestMinikubeProfile (71.9s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-463037 --driver=docker  --container-runtime=crio
E1226 22:20:48.044191  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-463037 --driver=docker  --container-runtime=crio: (33.757770142s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-465843 --driver=docker  --container-runtime=crio
E1226 22:21:16.630917  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/functional-262391/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-465843 --driver=docker  --container-runtime=crio: (32.394996923s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-463037
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-465843
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-465843" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-465843
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-465843: (2.058452771s)
helpers_test.go:175: Cleaning up "first-463037" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-463037
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-463037: (2.3097564s)
--- PASS: TestMinikubeProfile (71.90s)
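
The test reads profile list -ojson twice; since that JSON schema is not shown anywhere in this log, the sketch below makes no assumption about it and only decodes the document generically to report its top-level keys.

// Schema-agnostic decode of `profile list -ojson`.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "profile", "list", "-ojson").Output()
	if err != nil {
		panic(err)
	}
	var doc map[string]json.RawMessage
	if err := json.Unmarshal(out, &doc); err != nil {
		panic(err)
	}
	for key, raw := range doc {
		fmt.Printf("%s: %d bytes\n", key, len(raw))
	}
}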

TestMountStart/serial/StartWithMountFirst (9.91s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-495493 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-495493 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.911681545s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.91s)

TestMountStart/serial/VerifyMountFirst (0.31s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-495493 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.31s)

TestMountStart/serial/StartWithMountSecond (7.31s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-497716 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-497716 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.309777473s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.31s)

TestMountStart/serial/VerifyMountSecond (0.29s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-497716 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

TestMountStart/serial/DeleteFirst (1.69s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-495493 --alsologtostderr -v=5
E1226 22:22:09.965310  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/client.crt: no such file or directory
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-495493 --alsologtostderr -v=5: (1.685775094s)
--- PASS: TestMountStart/serial/DeleteFirst (1.69s)

TestMountStart/serial/VerifyMountPostDelete (0.29s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-497716 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.29s)

TestMountStart/serial/Stop (1.25s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-497716
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-497716: (1.247597615s)
--- PASS: TestMountStart/serial/Stop (1.25s)

TestMountStart/serial/RestartStopped (7.76s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-497716
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-497716: (6.7622531s)
--- PASS: TestMountStart/serial/RestartStopped (7.76s)

TestMountStart/serial/VerifyMountPostStop (0.29s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-497716 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.29s)

TestMultiNode/serial/FreshStart2Nodes (128.86s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-772557 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1226 22:23:11.961567  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/client.crt: no such file or directory
E1226 22:24:26.121184  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-arm64 start -p multinode-772557 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (2m8.267126873s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-arm64 -p multinode-772557 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (128.86s)

TestMultiNode/serial/DeployApp2Nodes (6.88s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-772557 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-772557 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-772557 -- rollout status deployment/busybox: (4.737592959s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-772557 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-772557 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-772557 -- exec busybox-5bc68d56bd-ls5rz -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-772557 -- exec busybox-5bc68d56bd-sffk7 -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-772557 -- exec busybox-5bc68d56bd-ls5rz -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-772557 -- exec busybox-5bc68d56bd-sffk7 -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-772557 -- exec busybox-5bc68d56bd-ls5rz -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-772557 -- exec busybox-5bc68d56bd-sffk7 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.88s)
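
A sketch of the per-pod DNS probe above: fetch the busybox pod names via jsonpath, then run nslookup inside each pod through minikube kubectl -- exec (profile name copied from this run).

// Probe cluster DNS from every busybox pod, mirroring the commands above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func kubectl(args ...string) string {
	all := append([]string{"kubectl", "-p", "multinode-772557", "--"}, args...)
	out, err := exec.Command("out/minikube-linux-arm64", all...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%v: %s", err, out))
	}
	return string(out)
}

func main() {
	pods := strings.Fields(kubectl("get", "pods", "-o", "jsonpath={.items[*].metadata.name}"))
	for _, pod := range pods {
		kubectl("exec", pod, "--", "nslookup", "kubernetes.default.svc.cluster.local")
		fmt.Println(pod, "resolves cluster DNS")
	}
}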

TestMultiNode/serial/AddNode (47.51s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-772557 -v 3 --alsologtostderr
E1226 22:24:53.806324  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/client.crt: no such file or directory
multinode_test.go:111: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-772557 -v 3 --alsologtostderr: (46.75740414s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-arm64 -p multinode-772557 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (47.51s)

TestMultiNode/serial/MultiNodeLabels (0.1s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-772557 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

TestMultiNode/serial/ProfileList (0.36s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.36s)

TestMultiNode/serial/CopyFile (11.37s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-arm64 -p multinode-772557 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-772557 cp testdata/cp-test.txt multinode-772557:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-772557 ssh -n multinode-772557 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-772557 cp multinode-772557:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile568634055/001/cp-test_multinode-772557.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-772557 ssh -n multinode-772557 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-772557 cp multinode-772557:/home/docker/cp-test.txt multinode-772557-m02:/home/docker/cp-test_multinode-772557_multinode-772557-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-772557 ssh -n multinode-772557 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-772557 ssh -n multinode-772557-m02 "sudo cat /home/docker/cp-test_multinode-772557_multinode-772557-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-772557 cp multinode-772557:/home/docker/cp-test.txt multinode-772557-m03:/home/docker/cp-test_multinode-772557_multinode-772557-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-772557 ssh -n multinode-772557 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-772557 ssh -n multinode-772557-m03 "sudo cat /home/docker/cp-test_multinode-772557_multinode-772557-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-772557 cp testdata/cp-test.txt multinode-772557-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-772557 ssh -n multinode-772557-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-772557 cp multinode-772557-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile568634055/001/cp-test_multinode-772557-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-772557 ssh -n multinode-772557-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-772557 cp multinode-772557-m02:/home/docker/cp-test.txt multinode-772557:/home/docker/cp-test_multinode-772557-m02_multinode-772557.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-772557 ssh -n multinode-772557-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-772557 ssh -n multinode-772557 "sudo cat /home/docker/cp-test_multinode-772557-m02_multinode-772557.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-772557 cp multinode-772557-m02:/home/docker/cp-test.txt multinode-772557-m03:/home/docker/cp-test_multinode-772557-m02_multinode-772557-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-772557 ssh -n multinode-772557-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-772557 ssh -n multinode-772557-m03 "sudo cat /home/docker/cp-test_multinode-772557-m02_multinode-772557-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-772557 cp testdata/cp-test.txt multinode-772557-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-772557 ssh -n multinode-772557-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-772557 cp multinode-772557-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile568634055/001/cp-test_multinode-772557-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-772557 ssh -n multinode-772557-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-772557 cp multinode-772557-m03:/home/docker/cp-test.txt multinode-772557:/home/docker/cp-test_multinode-772557-m03_multinode-772557.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-772557 ssh -n multinode-772557-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-772557 ssh -n multinode-772557 "sudo cat /home/docker/cp-test_multinode-772557-m03_multinode-772557.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-772557 cp multinode-772557-m03:/home/docker/cp-test.txt multinode-772557-m02:/home/docker/cp-test_multinode-772557-m03_multinode-772557-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-772557 ssh -n multinode-772557-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-772557 ssh -n multinode-772557-m02 "sudo cat /home/docker/cp-test_multinode-772557-m03_multinode-772557-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.37s)
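For reference, the copy matrix above boils down to three directions of `minikube cp` — host-to-node, node-to-host, and node-to-node — each verified with `ssh -n <node> cat`. A minimal hand-run sketch against the profile used in this run (the /tmp destination path is illustrative; the run invokes the binary as out/minikube-linux-arm64, shortened to `minikube` in the sketches below):

    # host -> node, then verify on the node
    minikube -p multinode-772557 cp testdata/cp-test.txt multinode-772557-m02:/home/docker/cp-test.txt
    minikube -p multinode-772557 ssh -n multinode-772557-m02 "sudo cat /home/docker/cp-test.txt"
    # node -> host
    minikube -p multinode-772557 cp multinode-772557-m02:/home/docker/cp-test.txt /tmp/cp-test_m02.txt
    # node -> node, then verify on the destination node
    minikube -p multinode-772557 cp multinode-772557-m02:/home/docker/cp-test.txt multinode-772557:/home/docker/cp-test_m02.txt
    minikube -p multinode-772557 ssh -n multinode-772557 "sudo cat /home/docker/cp-test_m02.txt"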

                                                
                                    
TestMultiNode/serial/StopNode (2.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p multinode-772557 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-arm64 -p multinode-772557 node stop m03: (1.244113213s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p multinode-772557 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-772557 status: exit status 7 (549.463519ms)

                                                
                                                
-- stdout --
	multinode-772557
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-772557-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-772557-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-arm64 -p multinode-772557 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-772557 status --alsologtostderr: exit status 7 (572.84904ms)

                                                
                                                
-- stdout --
	multinode-772557
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-772557-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-772557-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1226 22:25:43.158650  775751 out.go:296] Setting OutFile to fd 1 ...
	I1226 22:25:43.158780  775751 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 22:25:43.158789  775751 out.go:309] Setting ErrFile to fd 2...
	I1226 22:25:43.158795  775751 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 22:25:43.159054  775751 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17857-697646/.minikube/bin
	I1226 22:25:43.159231  775751 out.go:303] Setting JSON to false
	I1226 22:25:43.159296  775751 mustload.go:65] Loading cluster: multinode-772557
	I1226 22:25:43.159376  775751 notify.go:220] Checking for updates...
	I1226 22:25:43.159815  775751 config.go:182] Loaded profile config "multinode-772557": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1226 22:25:43.159833  775751 status.go:255] checking status of multinode-772557 ...
	I1226 22:25:43.160349  775751 cli_runner.go:164] Run: docker container inspect multinode-772557 --format={{.State.Status}}
	I1226 22:25:43.181226  775751 status.go:330] multinode-772557 host status = "Running" (err=<nil>)
	I1226 22:25:43.181247  775751 host.go:66] Checking if "multinode-772557" exists ...
	I1226 22:25:43.181633  775751 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-772557
	I1226 22:25:43.200227  775751 host.go:66] Checking if "multinode-772557" exists ...
	I1226 22:25:43.200647  775751 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1226 22:25:43.200695  775751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-772557
	I1226 22:25:43.232694  775751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33746 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/multinode-772557/id_rsa Username:docker}
	I1226 22:25:43.333312  775751 ssh_runner.go:195] Run: systemctl --version
	I1226 22:25:43.338934  775751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1226 22:25:43.353230  775751 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 22:25:43.422844  775751 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:55 SystemTime:2023-12-26 22:25:43.413211341 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1226 22:25:43.423453  775751 kubeconfig.go:92] found "multinode-772557" server: "https://192.168.58.2:8443"
	I1226 22:25:43.423494  775751 api_server.go:166] Checking apiserver status ...
	I1226 22:25:43.423539  775751 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1226 22:25:43.436608  775751 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1275/cgroup
	I1226 22:25:43.448347  775751 api_server.go:182] apiserver freezer: "4:freezer:/docker/ed1900d23c88a4acb8feaeb89fccc502b26fd99f3f09b7aaef22ccd1d6bfc430/crio/crio-f199c048ed0c8c4d2c15587ddaeebf7229fcbf6c780ee570d6b1f59ef7fcdc20"
	I1226 22:25:43.448421  775751 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/ed1900d23c88a4acb8feaeb89fccc502b26fd99f3f09b7aaef22ccd1d6bfc430/crio/crio-f199c048ed0c8c4d2c15587ddaeebf7229fcbf6c780ee570d6b1f59ef7fcdc20/freezer.state
	I1226 22:25:43.458818  775751 api_server.go:204] freezer state: "THAWED"
	I1226 22:25:43.458847  775751 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1226 22:25:43.468760  775751 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1226 22:25:43.468793  775751 status.go:421] multinode-772557 apiserver status = Running (err=<nil>)
	I1226 22:25:43.468804  775751 status.go:257] multinode-772557 status: &{Name:multinode-772557 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1226 22:25:43.468822  775751 status.go:255] checking status of multinode-772557-m02 ...
	I1226 22:25:43.469164  775751 cli_runner.go:164] Run: docker container inspect multinode-772557-m02 --format={{.State.Status}}
	I1226 22:25:43.486747  775751 status.go:330] multinode-772557-m02 host status = "Running" (err=<nil>)
	I1226 22:25:43.486772  775751 host.go:66] Checking if "multinode-772557-m02" exists ...
	I1226 22:25:43.487090  775751 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-772557-m02
	I1226 22:25:43.506425  775751 host.go:66] Checking if "multinode-772557-m02" exists ...
	I1226 22:25:43.506758  775751 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1226 22:25:43.506803  775751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-772557-m02
	I1226 22:25:43.527856  775751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33751 SSHKeyPath:/home/jenkins/minikube-integration/17857-697646/.minikube/machines/multinode-772557-m02/id_rsa Username:docker}
	I1226 22:25:43.627169  775751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1226 22:25:43.641026  775751 status.go:257] multinode-772557-m02 status: &{Name:multinode-772557-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1226 22:25:43.641061  775751 status.go:255] checking status of multinode-772557-m03 ...
	I1226 22:25:43.641369  775751 cli_runner.go:164] Run: docker container inspect multinode-772557-m03 --format={{.State.Status}}
	I1226 22:25:43.660103  775751 status.go:330] multinode-772557-m03 host status = "Stopped" (err=<nil>)
	I1226 22:25:43.660124  775751 status.go:343] host is not running, skipping remaining checks
	I1226 22:25:43.660132  775751 status.go:257] multinode-772557-m03 status: &{Name:multinode-772557-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.37s)
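The behaviour being asserted: stopping a single worker leaves the control plane Running, and `status` exits non-zero (7, as shown above) whenever any node is down. A hand-run sketch:

    minikube -p multinode-772557 node stop m03
    minikube -p multinode-772557 status   # prints per-node state; non-zero while m03 is Stopped
    echo $?                               # 7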

                                                
                                    
TestMultiNode/serial/StartAfterStop (12.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-772557 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-772557 node start m03 --alsologtostderr: (12.102296787s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p multinode-772557 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (12.95s)
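Restarting the stopped node brings `status` back to a zero exit and the node back into the cluster; a sketch of the same sequence:

    minikube -p multinode-772557 node start m03 --alsologtostderr
    minikube -p multinode-772557 status   # exit 0 once every node reports Running
    kubectl get nodes                     # m03 rejoins the node list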

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (120.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-772557
multinode_test.go:318: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-772557
E1226 22:26:16.630736  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/functional-262391/client.crt: no such file or directory
multinode_test.go:318: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-772557: (24.938558546s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-772557 --wait=true -v=8 --alsologtostderr
E1226 22:27:39.675802  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/functional-262391/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-arm64 start -p multinode-772557 --wait=true -v=8 --alsologtostderr: (1m35.481323629s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-772557
--- PASS: TestMultiNode/serial/RestartKeepsNodes (120.59s)
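The invariant here is that a full stop/start cycle preserves the node inventory; comparing `node list` output before and after is enough to check it by hand:

    minikube node list -p multinode-772557   # record the pre-stop inventory
    minikube stop -p multinode-772557
    minikube start -p multinode-772557 --wait=true -v=8 --alsologtostderr
    minikube node list -p multinode-772557   # should match the list recorded above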

                                                
                                    
TestMultiNode/serial/DeleteNode (5.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-772557 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p multinode-772557 node delete m03: (4.492063535s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p multinode-772557 status --alsologtostderr
multinode_test.go:442: (dbg) Run:  docker volume ls
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.27s)
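Deleting a node should also reap its Docker volume and remove it from the Kubernetes node list; a sketch of the same checks:

    minikube -p multinode-772557 node delete m03
    minikube -p multinode-772557 status --alsologtostderr   # only two nodes remain
    docker volume ls                                        # no volume left behind for the deleted node
    kubectl get nodes                                       # deleted node is gone; the rest report Ready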

                                                
                                    
TestMultiNode/serial/StopMultiNode (24s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-arm64 -p multinode-772557 stop
E1226 22:28:11.962367  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/client.crt: no such file or directory
multinode_test.go:342: (dbg) Done: out/minikube-linux-arm64 -p multinode-772557 stop: (23.784512498s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-arm64 -p multinode-772557 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-772557 status: exit status 7 (104.493286ms)

                                                
                                                
-- stdout --
	multinode-772557
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-772557-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p multinode-772557 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-772557 status --alsologtostderr: exit status 7 (109.495435ms)

                                                
                                                
-- stdout --
	multinode-772557
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-772557-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1226 22:28:26.421509  783962 out.go:296] Setting OutFile to fd 1 ...
	I1226 22:28:26.421759  783962 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 22:28:26.421785  783962 out.go:309] Setting ErrFile to fd 2...
	I1226 22:28:26.421806  783962 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 22:28:26.422098  783962 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17857-697646/.minikube/bin
	I1226 22:28:26.422314  783962 out.go:303] Setting JSON to false
	I1226 22:28:26.422429  783962 mustload.go:65] Loading cluster: multinode-772557
	I1226 22:28:26.422465  783962 notify.go:220] Checking for updates...
	I1226 22:28:26.422915  783962 config.go:182] Loaded profile config "multinode-772557": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1226 22:28:26.422951  783962 status.go:255] checking status of multinode-772557 ...
	I1226 22:28:26.424100  783962 cli_runner.go:164] Run: docker container inspect multinode-772557 --format={{.State.Status}}
	I1226 22:28:26.442553  783962 status.go:330] multinode-772557 host status = "Stopped" (err=<nil>)
	I1226 22:28:26.442574  783962 status.go:343] host is not running, skipping remaining checks
	I1226 22:28:26.442581  783962 status.go:257] multinode-772557 status: &{Name:multinode-772557 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1226 22:28:26.442619  783962 status.go:255] checking status of multinode-772557-m02 ...
	I1226 22:28:26.442915  783962 cli_runner.go:164] Run: docker container inspect multinode-772557-m02 --format={{.State.Status}}
	I1226 22:28:26.461008  783962 status.go:330] multinode-772557-m02 host status = "Stopped" (err=<nil>)
	I1226 22:28:26.461029  783962 status.go:343] host is not running, skipping remaining checks
	I1226 22:28:26.461036  783962 status.go:257] multinode-772557-m02 status: &{Name:multinode-772557-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.00s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (79.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-772557 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1226 22:29:26.121494  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-linux-arm64 start -p multinode-772557 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m18.260289995s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p multinode-772557 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (79.17s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (38.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-772557
multinode_test.go:480: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-772557-m02 --driver=docker  --container-runtime=crio
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-772557-m02 --driver=docker  --container-runtime=crio: exit status 14 (119.029813ms)

                                                
                                                
-- stdout --
	* [multinode-772557-m02] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17857
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17857-697646/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17857-697646/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-772557-m02' is duplicated with machine name 'multinode-772557-m02' in profile 'multinode-772557'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-772557-m03 --driver=docker  --container-runtime=crio
multinode_test.go:488: (dbg) Done: out/minikube-linux-arm64 start -p multinode-772557-m03 --driver=docker  --container-runtime=crio: (35.546925031s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-772557
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-772557: exit status 80 (358.87741ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-772557
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-772557-m03 already exists in multinode-772557-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-772557-m03
multinode_test.go:500: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-772557-m03: (2.090269148s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (38.18s)
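Both failure modes above are name-collision guards: exit 14 (MK_USAGE) when a new profile name matches an existing machine name, and exit 80 (GUEST_NODE_ADD) when the next node name is already claimed by another profile. Condensed:

    minikube start -p multinode-772557-m02 --driver=docker --container-runtime=crio   # exit 14: duplicates a machine name in profile multinode-772557
    minikube start -p multinode-772557-m03 --driver=docker --container-runtime=crio   # succeeds: standalone profile
    minikube node add -p multinode-772557                                             # exit 80: node name m03 is taken by that profile
    minikube delete -p multinode-772557-m03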

                                                
                                    
TestPreload (142.2s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-084972 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E1226 22:31:15.009534  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/client.crt: no such file or directory
E1226 22:31:16.631659  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/functional-262391/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-084972 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m22.683989s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-084972 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-084972 image pull gcr.io/k8s-minikube/busybox: (1.82336655s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-084972
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-084972: (5.983107647s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-084972 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-084972 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (49.085006689s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-084972 image list
helpers_test.go:175: Cleaning up "test-preload-084972" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-084972
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-084972: (2.345777873s)
--- PASS: TestPreload (142.20s)
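The scenario: start without preloaded images, pull an extra image, then restart (this time with the preload tarball available) and confirm the manually pulled image survived. Condensed from the run above:

    minikube start -p test-preload-084972 --memory=2200 --preload=false --kubernetes-version=v1.24.4 --driver=docker --container-runtime=crio
    minikube -p test-preload-084972 image pull gcr.io/k8s-minikube/busybox
    minikube stop -p test-preload-084972
    minikube start -p test-preload-084972 --memory=2200 --wait=true --driver=docker --container-runtime=crio
    minikube -p test-preload-084972 image list   # busybox should still be listed after the restart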

                                                
                                    
TestScheduledStopUnix (109.42s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-318679 --memory=2048 --driver=docker  --container-runtime=crio
E1226 22:33:11.961726  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-318679 --memory=2048 --driver=docker  --container-runtime=crio: (32.7735438s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-318679 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-318679 -n scheduled-stop-318679
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-318679 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-318679 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-318679 -n scheduled-stop-318679
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-318679
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-318679 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1226 22:34:26.121295  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-318679
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-318679: exit status 7 (87.594634ms)

                                                
                                                
-- stdout --
	scheduled-stop-318679
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-318679 -n scheduled-stop-318679
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-318679 -n scheduled-stop-318679: exit status 7 (83.719155ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-318679" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-318679
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-318679: (4.827988882s)
--- PASS: TestScheduledStopUnix (109.42s)
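The scheduled-stop lifecycle exercised above, as plain commands (5m/15s are the values this run uses; the sleep is only illustrative, to let the timer fire — the test itself polls):

    minikube stop -p scheduled-stop-318679 --schedule 5m          # arm a stop five minutes out
    minikube stop -p scheduled-stop-318679 --cancel-scheduled     # disarm it; host stays Running
    minikube stop -p scheduled-stop-318679 --schedule 15s         # re-arm with a short fuse
    sleep 30
    minikube status -p scheduled-stop-318679 --format={{.Host}}   # "Stopped", exit status 7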

                                                
                                    
TestInsufficientStorage (13.63s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-919982 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-919982 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.953634712s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ec229a2c-c6ce-4225-a5fa-3c9e1a1754f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-919982] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7e4453b0-9993-40f7-a4cc-dab830a9bdef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17857"}}
	{"specversion":"1.0","id":"234ec464-df86-49ef-989d-178ede65c024","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"dd9c5f6e-72d1-4372-9677-8880c02e6cfe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17857-697646/kubeconfig"}}
	{"specversion":"1.0","id":"3b544fa3-0158-44f6-a39b-b7656aa07f17","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17857-697646/.minikube"}}
	{"specversion":"1.0","id":"7fc64cbf-cac7-4cd4-9e44-41537028a60d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"48f334f4-d002-4902-8c92-fa0327ae90d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c695add5-204f-439d-b3fb-363583a97a30","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"99c66871-4617-4b31-b235-a414c4d32e73","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"500a99e7-32a4-4e0e-8d93-30e8422c39eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"32ea9ff5-7d78-45bc-a057-b09f9798047e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"48d92856-fca3-4d2e-810a-1300a93e9f42","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-919982 in cluster insufficient-storage-919982","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"1062b5af-cd25-47a3-a1ee-82817a4cc8b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1703498848-17857 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"ccdc86bf-b642-498a-9f90-0dd57bb1f700","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"4400bac4-b9ba-4b3c-b1e8-e01977fa9e18","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-919982 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-919982 --output=json --layout=cluster: exit status 7 (318.124696ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-919982","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-919982","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1226 22:34:53.394390  800403 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-919982" does not appear in /home/jenkins/minikube-integration/17857-697646/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-919982 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-919982 --output=json --layout=cluster: exit status 7 (364.776343ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-919982","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-919982","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1226 22:34:53.760256  800457 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-919982" does not appear in /home/jenkins/minikube-integration/17857-697646/kubeconfig
	E1226 22:34:53.772496  800457 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/insufficient-storage-919982/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-919982" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-919982
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-919982: (1.989695001s)
--- PASS: TestInsufficientStorage (13.63s)
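When the storage check trips, `start` exits 26 and `status --output=json` reports the HTTP-style code 507 (InsufficientStorage) for the cluster and its node. A sketch of checking that programmatically — jq is an assumption here, not something the test uses, and MINIKUBE_TEST_STORAGE_CAPACITY is the environment variable the harness sets to force the condition:

    MINIKUBE_TEST_STORAGE_CAPACITY=100 minikube start -p insufficient-storage-919982 --memory=2048 --output=json --wait=true --driver=docker --container-runtime=crio   # exit 26
    minikube status -p insufficient-storage-919982 --output=json --layout=cluster | jq '.StatusCode'   # 507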

                                                
                                    
TestKubernetesUpgrade (389.88s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-091320 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-091320 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m7.701210528s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-091320
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-091320: (2.033150257s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-091320 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-091320 status --format={{.Host}}: exit status 7 (83.971916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-091320 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-091320 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m47.598753838s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-091320 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-091320 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-091320 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (138.073784ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-091320] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17857
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17857-697646/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17857-697646/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-091320
	    minikube start -p kubernetes-upgrade-091320 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0913202 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-091320 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-091320 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-091320 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (29.656863797s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-091320" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-091320
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-091320: (2.521223008s)
--- PASS: TestKubernetesUpgrade (389.88s)
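The upgrade path above in command form: provision at an old version, stop, restart at a newer version in place; an attempted downgrade is refused with exit 106 (K8S_DOWNGRADE_UNSUPPORTED), and the suggested way out is delete-and-recreate:

    minikube start -p kubernetes-upgrade-091320 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=crio
    minikube stop -p kubernetes-upgrade-091320
    minikube start -p kubernetes-upgrade-091320 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --driver=docker --container-runtime=crio   # upgrade in place
    minikube start -p kubernetes-upgrade-091320 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=crio        # exit 106: downgrade refused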

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-412850 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-412850 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (106.851393ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-412850] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17857
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17857-697646/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17857-697646/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
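The guard here: `--no-kubernetes` and `--kubernetes-version` are mutually exclusive (exit 14, MK_USAGE), and the error message points at the config unset needed when a version is pinned globally:

    minikube start -p NoKubernetes-412850 --no-kubernetes --kubernetes-version=1.20 --driver=docker --container-runtime=crio   # exit 14
    minikube config unset kubernetes-version   # clears a globally pinned version, as the error suggests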

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (43.61s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-412850 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-412850 --driver=docker  --container-runtime=crio: (43.041186253s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-412850 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (43.61s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (19.87s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-412850 --no-kubernetes --driver=docker  --container-runtime=crio
E1226 22:35:49.167353  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-412850 --no-kubernetes --driver=docker  --container-runtime=crio: (17.346450686s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-412850 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-412850 status -o json: exit status 2 (367.020257ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-412850","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-412850
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-412850: (2.153136765s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (19.87s)

                                                
                                    
TestNoKubernetes/serial/Start (10.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-412850 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-412850 --no-kubernetes --driver=docker  --container-runtime=crio: (10.173070128s)
--- PASS: TestNoKubernetes/serial/Start (10.17s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-412850 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-412850 "sudo systemctl is-active --quiet service kubelet": exit status 1 (381.980313ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.38s)
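The verification is just systemd over ssh: `is-active` exits non-zero when kubelet is not running, and `minikube ssh` propagates that as its own exit status. Sketch:

    minikube ssh -p NoKubernetes-412850 "sudo systemctl is-active --quiet service kubelet"
    echo $?   # non-zero: kubelet is not active in a --no-kubernetes profile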

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.16s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.16s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-412850
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-412850: (1.323305372s)
--- PASS: TestNoKubernetes/serial/Stop (1.32s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (8.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-412850 --driver=docker  --container-runtime=crio
E1226 22:36:16.631659  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/functional-262391/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-412850 --driver=docker  --container-runtime=crio: (8.064674194s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.06s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-412850 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-412850 "sudo systemctl is-active --quiet service kubelet": exit status 1 (375.115515ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.14s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.14s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.72s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-572640
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.72s)

                                                
                                    
TestPause/serial/Start (56.32s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-839657 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1226 22:41:16.630741  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/functional-262391/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-839657 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (56.320360932s)
--- PASS: TestPause/serial/Start (56.32s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (28.11s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-839657 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-839657 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (28.074543623s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (28.11s)

                                                
                                    
TestPause/serial/Pause (0.94s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-839657 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.94s)

                                                
                                    
TestPause/serial/VerifyStatus (0.42s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-839657 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-839657 --output=json --layout=cluster: exit status 2 (415.163058ms)

                                                
                                                
-- stdout --
	{"Name":"pause-839657","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-839657","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.42s)

                                                
                                    
TestPause/serial/Unpause (1.16s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-839657 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-linux-arm64 unpause -p pause-839657 --alsologtostderr -v=5: (1.155292328s)
--- PASS: TestPause/serial/Unpause (1.16s)

                                                
                                    
TestPause/serial/PauseAgain (1.88s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-839657 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-839657 --alsologtostderr -v=5: (1.878352644s)
--- PASS: TestPause/serial/PauseAgain (1.88s)

                                                
                                    
TestPause/serial/DeletePaused (3.47s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-839657 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-839657 --alsologtostderr -v=5: (3.468954529s)
--- PASS: TestPause/serial/DeletePaused (3.47s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (12.83s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (12.76260038s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-839657
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-839657: exit status 1 (16.990917ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-839657: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (12.83s)
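
DeletePaused and VerifyDeletedResources above reduce to one observable contract: once "minikube delete" has run, "docker volume inspect <profile>" must exit non-zero, exactly as the stderr above shows. A rough Go equivalent (profile name taken from this log; volumeGone is a made-up helper, not part of the suite):

// volumegone.go: assert a docker volume no longer exists after "minikube delete",
// mirroring the inspect check pause_test.go performs above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func volumeGone(name string) bool {
	// "docker volume inspect" exits non-zero when the volume is absent,
	// as seen in the "no such volume" stderr above.
	return exec.Command("docker", "volume", "inspect", name).Run() != nil
}

func main() {
	if !volumeGone("pause-839657") {
		fmt.Fprintln(os.Stderr, "volume still exists; delete did not clean up")
		os.Exit(1)
	}
	fmt.Println("volume removed as expected")
}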

TestNetworkPlugins/group/false (6.27s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-937472 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-937472 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (323.060848ms)

-- stdout --
	* [false-937472] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17857
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17857-697646/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17857-697646/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1226 22:42:57.851708  838488 out.go:296] Setting OutFile to fd 1 ...
	I1226 22:42:57.852076  838488 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 22:42:57.852114  838488 out.go:309] Setting ErrFile to fd 2...
	I1226 22:42:57.852136  838488 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1226 22:42:57.852511  838488 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17857-697646/.minikube/bin
	I1226 22:42:57.853336  838488 out.go:303] Setting JSON to false
	I1226 22:42:57.854693  838488 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":23112,"bootTime":1703607466,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1226 22:42:57.854807  838488 start.go:138] virtualization:  
	I1226 22:42:57.858368  838488 out.go:177] * [false-937472] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1226 22:42:57.861080  838488 notify.go:220] Checking for updates...
	I1226 22:42:57.862261  838488 out.go:177]   - MINIKUBE_LOCATION=17857
	I1226 22:42:57.864422  838488 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1226 22:42:57.867159  838488 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17857-697646/kubeconfig
	I1226 22:42:57.869118  838488 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17857-697646/.minikube
	I1226 22:42:57.870844  838488 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1226 22:42:57.873016  838488 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1226 22:42:57.875433  838488 config.go:182] Loaded profile config "force-systemd-flag-109501": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1226 22:42:57.875639  838488 driver.go:392] Setting default libvirt URI to qemu:///system
	I1226 22:42:57.901880  838488 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1226 22:42:57.901993  838488 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1226 22:42:58.055989  838488 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:45 SystemTime:2023-12-26 22:42:58.038641918 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1226 22:42:58.056097  838488 docker.go:295] overlay module found
	I1226 22:42:58.058456  838488 out.go:177] * Using the docker driver based on user configuration
	I1226 22:42:58.060671  838488 start.go:298] selected driver: docker
	I1226 22:42:58.060696  838488 start.go:902] validating driver "docker" against <nil>
	I1226 22:42:58.060710  838488 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1226 22:42:58.063422  838488 out.go:177] 
	W1226 22:42:58.065786  838488 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1226 22:42:58.067915  838488 out.go:177] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-937472 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-937472

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-937472

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-937472

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-937472

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-937472

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-937472

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-937472

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-937472

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-937472

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-937472

>>> host: /etc/nsswitch.conf:
* Profile "false-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937472"

>>> host: /etc/hosts:
* Profile "false-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937472"

>>> host: /etc/resolv.conf:
* Profile "false-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937472"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-937472

>>> host: crictl pods:
* Profile "false-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937472"

>>> host: crictl containers:
* Profile "false-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937472"

>>> k8s: describe netcat deployment:
error: context "false-937472" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-937472" does not exist

>>> k8s: netcat logs:
error: context "false-937472" does not exist

>>> k8s: describe coredns deployment:
error: context "false-937472" does not exist

>>> k8s: describe coredns pods:
error: context "false-937472" does not exist

>>> k8s: coredns logs:
error: context "false-937472" does not exist

>>> k8s: describe api server pod(s):
error: context "false-937472" does not exist

>>> k8s: api server logs:
error: context "false-937472" does not exist

>>> host: /etc/cni:
* Profile "false-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937472"

>>> host: ip a s:
* Profile "false-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937472"

>>> host: ip r s:
* Profile "false-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937472"

>>> host: iptables-save:
* Profile "false-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937472"

>>> host: iptables table nat:
* Profile "false-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937472"

>>> k8s: describe kube-proxy daemon set:
error: context "false-937472" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-937472" does not exist

>>> k8s: kube-proxy logs:
error: context "false-937472" does not exist

>>> host: kubelet daemon status:
* Profile "false-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937472"

>>> host: kubelet daemon config:
* Profile "false-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937472"

>>> k8s: kubelet logs:
* Profile "false-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937472"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937472"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937472"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-937472

>>> host: docker daemon status:
* Profile "false-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937472"

>>> host: docker daemon config:
* Profile "false-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937472"

>>> host: /etc/docker/daemon.json:
* Profile "false-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937472"

>>> host: docker system info:
* Profile "false-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937472"

>>> host: cri-docker daemon status:
* Profile "false-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937472"

>>> host: cri-docker daemon config:
* Profile "false-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937472"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937472"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937472"

>>> host: cri-dockerd version:
* Profile "false-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937472"

>>> host: containerd daemon status:
* Profile "false-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937472"

>>> host: containerd daemon config:
* Profile "false-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937472"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937472"

>>> host: /etc/containerd/config.toml:
* Profile "false-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937472"

>>> host: containerd config dump:
* Profile "false-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937472"

>>> host: crio daemon status:
* Profile "false-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937472"

>>> host: crio daemon config:
* Profile "false-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937472"

>>> host: /etc/crio:
* Profile "false-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937472"

>>> host: crio config:
* Profile "false-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-937472"

----------------------- debugLogs end: false-937472 [took: 5.620626023s] --------------------------------
helpers_test.go:175: Cleaning up "false-937472" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-937472
--- PASS: TestNetworkPlugins/group/false (6.27s)
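
Note that this subtest passes because the start is expected to fail: with --container-runtime=crio, --cni=false must be rejected up front with MK_USAGE (exit status 14), and the debugLogs dump above merely confirms that no cluster was ever created. A hedged Go sketch of that negative assertion (binary path, profile name, and flags copied from the log; the exit-code handling is standard os/exec):

// cnirequired.go: expect "minikube start --cni=false" to be refused for the
// crio runtime with exit status 14 (MK_USAGE), per the stderr above.
package main

import (
	"errors"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	err := exec.Command("out/minikube-linux-arm64", "start",
		"-p", "false-937472", "--cni=false",
		"--driver=docker", "--container-runtime=crio").Run()
	var ee *exec.ExitError
	if !errors.As(err, &ee) || ee.ExitCode() != 14 {
		fmt.Fprintln(os.Stderr, "expected exit status 14 (MK_USAGE), got:", err)
		os.Exit(1)
	}
	fmt.Println("crio correctly refused to start without CNI")
}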

TestStartStop/group/old-k8s-version/serial/FirstStart (126.5s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-797449 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E1226 22:46:16.632034  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/functional-262391/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-797449 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (2m6.494892666s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (126.50s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.52s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-797449 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b36d9070-be79-4b4d-90b7-7c0152740e20] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b36d9070-be79-4b4d-90b7-7c0152740e20] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.002893798s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-797449 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.52s)
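
The DeployApp pattern above (create busybox, wait for readiness by label, then probe the open-file limit inside the pod) can be approximated with two kubectl invocations. A sketch under that assumption (context, selector, and timeout taken from the log; using "kubectl wait" in place of the suite's own polling helper is a substitution, not the test's actual mechanism):

// waitbusybox.go: wait for the busybox pod to become Ready, then run
// "ulimit -n" in it, approximating the DeployApp steps above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func kubectl(args ...string) (string, error) {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	return string(out), err
}

func main() {
	ctx := "old-k8s-version-797449"
	if out, err := kubectl("--context", ctx, "wait", "--for=condition=ready",
		"pod", "-l", "integration-test=busybox", "--timeout=8m0s"); err != nil {
		fmt.Fprintln(os.Stderr, out)
		os.Exit(1)
	}
	out, err := kubectl("--context", ctx, "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n")
	if err != nil {
		fmt.Fprintln(os.Stderr, out)
		os.Exit(1)
	}
	fmt.Print("open file limit: ", out)
}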

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-797449 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-797449 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.14s)

TestStartStop/group/old-k8s-version/serial/Stop (12.05s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-797449 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-797449 --alsologtostderr -v=3: (12.053393284s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.05s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-797449 -n old-k8s-version-797449
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-797449 -n old-k8s-version-797449: exit status 7 (105.99422ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-797449 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.31s)
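
The "(may be ok)" annotation above exists because "minikube status" encodes host state in its exit code, so exit status 7 is the expected answer for a cleanly stopped cluster rather than a failure. A tiny sketch of that tolerance (treating only exit 7 as benign is an assumption drawn from this log alone):

// statustolerate.go: treat the "stopped host" exit code from "minikube status"
// as informative rather than fatal, as the harness does above.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-797449").Run()
	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("host running")
	case errors.As(err, &ee) && ee.ExitCode() == 7:
		fmt.Println("host stopped (exit 7, may be ok)")
	default:
		fmt.Println("unexpected status error:", err)
	}
}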

TestStartStop/group/old-k8s-version/serial/SecondStart (449.89s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-797449 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-797449 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (7m29.411124356s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-797449 -n old-k8s-version-797449
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (449.89s)

TestStartStop/group/no-preload/serial/FirstStart (67.74s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-596259 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E1226 22:47:55.018149  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/client.crt: no such file or directory
E1226 22:48:11.961331  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-596259 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (1m7.739963092s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (67.74s)

TestStartStop/group/no-preload/serial/DeployApp (10.33s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-596259 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7fab4110-3e5d-466d-885c-b3e5beca0c72] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7fab4110-3e5d-466d-885c-b3e5beca0c72] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.00337698s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-596259 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.33s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.17s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-596259 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-596259 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.021072112s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-596259 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.17s)

TestStartStop/group/no-preload/serial/Stop (12.04s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-596259 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-596259 --alsologtostderr -v=3: (12.035840236s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.04s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-596259 -n no-preload-596259
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-596259 -n no-preload-596259: exit status 7 (92.634932ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-596259 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/no-preload/serial/SecondStart (621.54s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-596259 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E1226 22:49:26.121751  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/client.crt: no such file or directory
E1226 22:51:16.630845  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/functional-262391/client.crt: no such file or directory
E1226 22:52:29.168344  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/client.crt: no such file or directory
E1226 22:53:11.961233  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/client.crt: no such file or directory
E1226 22:54:26.121760  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-596259 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (10m21.096638684s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-596259 -n no-preload-596259
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (621.54s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-s48km" [cdbdab10-ad59-47ec-a4cb-95247b69350e] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005035853s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-s48km" [cdbdab10-ad59-47ec-a4cb-95247b69350e] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003785045s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-797449 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.17s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.34s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-797449 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220726-ed811e41
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.34s)
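
The "Found non-minikube image" lines are informational rather than failures: the check lists the images present in the profile and flags any outside the set minikube itself ships, so leftovers such as kindnetd and the busybox test image are expected here. A toy version of that filter (the allow-list below is illustrative, not the test's real table):

// imagefilter.go: report images outside a known-good prefix list, loosely
// following what VerifyKubernetesImages logs above.
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Prefixes treated as "minikube images" here are an assumption.
	allowed := []string{"registry.k8s.io/", "gcr.io/k8s-minikube/storage-provisioner"}
	images := []string{
		"registry.k8s.io/kube-apiserver:v1.16.0",
		"kindest/kindnetd:v20230809-80a64d96",
		"gcr.io/k8s-minikube/busybox:1.28.4-glibc",
	}
	for _, img := range images {
		known := false
		for _, p := range allowed {
			if strings.HasPrefix(img, p) {
				known = true
				break
			}
		}
		if !known {
			fmt.Println("Found non-minikube image:", img)
		}
	}
}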

TestStartStop/group/old-k8s-version/serial/Pause (4.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-797449 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-797449 --alsologtostderr -v=1: (1.380401913s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-797449 -n old-k8s-version-797449
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-797449 -n old-k8s-version-797449: exit status 2 (463.665952ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-797449 -n old-k8s-version-797449
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-797449 -n old-k8s-version-797449: exit status 2 (512.561677ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-797449 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-797449 -n old-k8s-version-797449
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-797449 -n old-k8s-version-797449
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.32s)
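
To make the pass criteria of this subtest explicit: pause must succeed, the APIServer status template must then print Paused and the Kubelet one Stopped (each with exit status 2, tolerated as "may be ok"), and unpause must bring both status calls back to a zero exit. A compressed Go sketch of that cycle (binary path, profile, and format strings from the log; error handling is simplified):

// pausecycle.go: pause a profile, read the component status templates,
// then unpause — the same sequence the Pause subtest drives above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func status(profile, field string) string {
	// Exit status 2 is expected while paused; stdout still carries the value.
	out, _ := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{."+field+"}}", "-p", profile, "-n", profile).Output()
	return strings.TrimSpace(string(out))
}

func main() {
	p := "old-k8s-version-797449"
	if err := exec.Command("out/minikube-linux-arm64", "pause", "-p", p).Run(); err != nil {
		fmt.Fprintln(os.Stderr, "pause:", err)
		os.Exit(1)
	}
	// Expect Paused / Stopped here, per the stdout blocks above.
	fmt.Printf("apiserver=%s kubelet=%s\n", status(p, "APIServer"), status(p, "Kubelet"))
	if err := exec.Command("out/minikube-linux-arm64", "unpause", "-p", p).Run(); err != nil {
		fmt.Fprintln(os.Stderr, "unpause:", err)
		os.Exit(1)
	}
	fmt.Println("unpaused; status calls should now exit 0")
}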

TestStartStop/group/embed-certs/serial/FirstStart (79.25s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-479602 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-479602 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (1m19.24695909s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (79.25s)

TestStartStop/group/embed-certs/serial/DeployApp (11.37s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-479602 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0ba6f026-07c2-4a9e-a625-69f3bde037e6] Pending
helpers_test.go:344: "busybox" [0ba6f026-07c2-4a9e-a625-69f3bde037e6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0ba6f026-07c2-4a9e-a625-69f3bde037e6] Running
E1226 22:56:16.631506  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/functional-262391/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.00326796s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-479602 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.37s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.24s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-479602 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-479602 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.118134711s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-479602 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.24s)

TestStartStop/group/embed-certs/serial/Stop (12s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-479602 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-479602 --alsologtostderr -v=3: (12.004327906s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.00s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-479602 -n embed-certs-479602
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-479602 -n embed-certs-479602: exit status 7 (100.872202ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-479602 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/embed-certs/serial/SecondStart (624.49s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-479602 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E1226 22:56:39.304713  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/old-k8s-version-797449/client.crt: no such file or directory
E1226 22:56:39.310194  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/old-k8s-version-797449/client.crt: no such file or directory
E1226 22:56:39.320550  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/old-k8s-version-797449/client.crt: no such file or directory
E1226 22:56:39.340894  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/old-k8s-version-797449/client.crt: no such file or directory
E1226 22:56:39.381307  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/old-k8s-version-797449/client.crt: no such file or directory
E1226 22:56:39.461695  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/old-k8s-version-797449/client.crt: no such file or directory
E1226 22:56:39.622130  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/old-k8s-version-797449/client.crt: no such file or directory
E1226 22:56:39.942309  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/old-k8s-version-797449/client.crt: no such file or directory
E1226 22:56:40.582906  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/old-k8s-version-797449/client.crt: no such file or directory
E1226 22:56:41.863619  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/old-k8s-version-797449/client.crt: no such file or directory
E1226 22:56:44.424326  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/old-k8s-version-797449/client.crt: no such file or directory
E1226 22:56:49.545003  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/old-k8s-version-797449/client.crt: no such file or directory
E1226 22:56:59.785195  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/old-k8s-version-797449/client.crt: no such file or directory
E1226 22:57:20.265765  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/old-k8s-version-797449/client.crt: no such file or directory
E1226 22:58:01.226913  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/old-k8s-version-797449/client.crt: no such file or directory
E1226 22:58:11.961313  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-479602 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (10m23.83395844s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-479602 -n embed-certs-479602
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (624.49s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-jbp67" [4dc9d4bf-2c99-4596-812c-687ced9740bb] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004328328s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-jbp67" [4dc9d4bf-2c99-4596-812c-687ced9740bb] Running
E1226 22:59:23.147980  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/old-k8s-version-797449/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004176859s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-596259 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-596259 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.56s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-596259 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-596259 -n no-preload-596259
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-596259 -n no-preload-596259: exit status 2 (390.840123ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-596259 -n no-preload-596259
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-596259 -n no-preload-596259: exit status 2 (367.675365ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-596259 --alsologtostderr -v=1
E1226 22:59:26.121948  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-596259 -n no-preload-596259
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-596259 -n no-preload-596259
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.56s)
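For reference, the pause/unpause cycle this test drives can be replayed by hand against the same profile. A minimal sketch using only the commands echoed above; while the profile is paused, status exits with code 2 and reports the apiserver as Paused and the kubelet as Stopped, which the test treats as acceptable:

    out/minikube-linux-arm64 pause -p no-preload-596259 --alsologtostderr -v=1
    out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-596259 -n no-preload-596259   # "Paused", exit status 2
    out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-596259 -n no-preload-596259     # "Stopped", exit status 2
    out/minikube-linux-arm64 unpause -p no-preload-596259 --alsologtostderr -v=1
    out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-596259 -n no-preload-596259   # exits 0 again, as in the run above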

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81.97s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-175689 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-175689 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (1m21.967755816s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81.97s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.4s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-175689 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [98b785e8-2314-4078-bd4e-29bd622619e6] Pending
helpers_test.go:344: "busybox" [98b785e8-2314-4078-bd4e-29bd622619e6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [98b785e8-2314-4078-bd4e-29bd622619e6] Running
E1226 23:00:59.676988  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/functional-262391/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004324874s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-175689 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.40s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-175689 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-175689 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.137864137s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-175689 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-175689 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-175689 --alsologtostderr -v=3: (12.06610862s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-175689 -n default-k8s-diff-port-175689
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-175689 -n default-k8s-diff-port-175689: exit status 7 (89.237763ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-175689 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)
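The same check can be reproduced manually: against a stopped profile, status exits with code 7 (host stopped), yet addons can still be enabled so they come up on the next start. A minimal sketch with the commands from this test:

    out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-175689 -n default-k8s-diff-port-175689   # "Stopped", exit status 7
    out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-175689 --images=MetricsScraper=registry.k8s.io/echoserver:1.4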

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (625.77s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-175689 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E1226 23:01:16.631452  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/functional-262391/client.crt: no such file or directory
E1226 23:01:39.305384  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/old-k8s-version-797449/client.crt: no such file or directory
E1226 23:02:06.988918  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/old-k8s-version-797449/client.crt: no such file or directory
E1226 23:03:11.961775  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/client.crt: no such file or directory
E1226 23:03:27.504809  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/no-preload-596259/client.crt: no such file or directory
E1226 23:03:27.510069  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/no-preload-596259/client.crt: no such file or directory
E1226 23:03:27.520544  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/no-preload-596259/client.crt: no such file or directory
E1226 23:03:27.540879  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/no-preload-596259/client.crt: no such file or directory
E1226 23:03:27.581132  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/no-preload-596259/client.crt: no such file or directory
E1226 23:03:27.661423  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/no-preload-596259/client.crt: no such file or directory
E1226 23:03:27.821831  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/no-preload-596259/client.crt: no such file or directory
E1226 23:03:28.142394  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/no-preload-596259/client.crt: no such file or directory
E1226 23:03:28.782922  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/no-preload-596259/client.crt: no such file or directory
E1226 23:03:30.064097  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/no-preload-596259/client.crt: no such file or directory
E1226 23:03:32.624837  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/no-preload-596259/client.crt: no such file or directory
E1226 23:03:37.745656  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/no-preload-596259/client.crt: no such file or directory
E1226 23:03:47.986514  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/no-preload-596259/client.crt: no such file or directory
E1226 23:04:08.466963  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/no-preload-596259/client.crt: no such file or directory
E1226 23:04:26.121141  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/client.crt: no such file or directory
E1226 23:04:35.018427  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/client.crt: no such file or directory
E1226 23:04:49.428030  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/no-preload-596259/client.crt: no such file or directory
E1226 23:06:11.348881  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/no-preload-596259/client.crt: no such file or directory
E1226 23:06:16.631503  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/functional-262391/client.crt: no such file or directory
E1226 23:06:39.305425  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/old-k8s-version-797449/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-175689 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (10m25.363564317s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-175689 -n default-k8s-diff-port-175689
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (625.77s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-sddc6" [2a0c579b-db36-4b05-8b13-4c0fa9a49223] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005838749s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-sddc6" [2a0c579b-db36-4b05-8b13-4c0fa9a49223] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003912125s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-479602 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-479602 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.31s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (5.21s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-479602 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p embed-certs-479602 --alsologtostderr -v=1: (1.24364227s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-479602 -n embed-certs-479602
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-479602 -n embed-certs-479602: exit status 2 (604.42828ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-479602 -n embed-certs-479602
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-479602 -n embed-certs-479602: exit status 2 (652.410805ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-479602 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p embed-certs-479602 --alsologtostderr -v=1: (1.248364098s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-479602 -n embed-certs-479602
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-479602 -n embed-certs-479602
--- PASS: TestStartStop/group/embed-certs/serial/Pause (5.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (46.08s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-376113 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-376113 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (46.078406167s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (46.08s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-376113 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-376113 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.064561061s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.06s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.28s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-376113 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-376113 --alsologtostderr -v=3: (1.276530041s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.28s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-376113 -n newest-cni-376113
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-376113 -n newest-cni-376113: exit status 7 (97.648042ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-376113 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (31.57s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-376113 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E1226 23:08:11.961883  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/client.crt: no such file or directory
E1226 23:08:27.505691  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/no-preload-596259/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-376113 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (31.157778062s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-376113 -n newest-cni-376113
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (31.57s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-376113 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.36s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-376113 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-376113 -n newest-cni-376113
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-376113 -n newest-cni-376113: exit status 2 (449.665614ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-376113 -n newest-cni-376113
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-376113 -n newest-cni-376113: exit status 2 (406.13659ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-376113 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-376113 -n newest-cni-376113
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-376113 -n newest-cni-376113
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.36s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (78.49s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-937472 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1226 23:08:55.189058  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/no-preload-596259/client.crt: no such file or directory
E1226 23:09:09.169126  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/client.crt: no such file or directory
E1226 23:09:26.121715  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/ingress-addon-legacy-324559/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-937472 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m18.49408903s)
--- PASS: TestNetworkPlugins/group/auto/Start (78.49s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-937472 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.32s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-937472 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-vrlgf" [196f6afb-9557-408c-94ee-0709e1339fcb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-vrlgf" [196f6afb-9557-408c-94ee-0709e1339fcb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004524083s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.32s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-937472 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-937472 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-937472 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)
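The three probes above (DNS, Localhost, HairPin) are each a single command executed inside the netcat test deployment, and they can be rerun by hand with the same kubectl invocations. The last one dials the pod's own Service by name (netcat), so it only succeeds when hairpin traffic back to the originating pod works:

    kubectl --context auto-937472 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-937472 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-937472 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"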

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (83.15s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-937472 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1226 23:11:16.630862  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/functional-262391/client.crt: no such file or directory
E1226 23:11:39.305324  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/old-k8s-version-797449/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-937472 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m23.145607894s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (83.15s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-dq6zb" [af327af1-acb3-4c73-988d-2b3d9037f4f8] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003830141s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-dq6zb" [af327af1-acb3-4c73-988d-2b3d9037f4f8] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003767269s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-175689 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-175689 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.55s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-175689 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-175689 -n default-k8s-diff-port-175689
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-175689 -n default-k8s-diff-port-175689: exit status 2 (367.88411ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-175689 -n default-k8s-diff-port-175689
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-175689 -n default-k8s-diff-port-175689: exit status 2 (359.547046ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-175689 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-175689 -n default-k8s-diff-port-175689
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-175689 -n default-k8s-diff-port-175689
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.55s)
E1226 23:16:39.305263  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/old-k8s-version-797449/client.crt: no such file or directory
E1226 23:17:01.960058  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/kindnet-937472/client.crt: no such file or directory
E1226 23:17:01.966213  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/kindnet-937472/client.crt: no such file or directory
E1226 23:17:01.976480  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/kindnet-937472/client.crt: no such file or directory
E1226 23:17:01.996800  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/kindnet-937472/client.crt: no such file or directory
E1226 23:17:02.037067  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/kindnet-937472/client.crt: no such file or directory
E1226 23:17:02.117622  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/kindnet-937472/client.crt: no such file or directory
E1226 23:17:02.278012  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/kindnet-937472/client.crt: no such file or directory
E1226 23:17:02.598897  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/kindnet-937472/client.crt: no such file or directory
E1226 23:17:03.239643  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/kindnet-937472/client.crt: no such file or directory
E1226 23:17:04.520112  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/kindnet-937472/client.crt: no such file or directory
E1226 23:17:07.081188  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/kindnet-937472/client.crt: no such file or directory
E1226 23:17:12.201752  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/kindnet-937472/client.crt: no such file or directory
E1226 23:17:15.244655  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/default-k8s-diff-port-175689/client.crt: no such file or directory
E1226 23:17:22.442100  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/kindnet-937472/client.crt: no such file or directory
E1226 23:17:39.677424  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/functional-262391/client.crt: no such file or directory
E1226 23:17:42.922831  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/kindnet-937472/client.crt: no such file or directory

                                                
                                    
TestNetworkPlugins/group/calico/Start (81.9s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-937472 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-937472 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m21.897311737s)
--- PASS: TestNetworkPlugins/group/calico/Start (81.90s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-n76mw" [25858c6c-6302-4f15-b58b-303a919a60c0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005280959s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.02s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.46s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-937472 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.46s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (12.29s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-937472 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-8rb6t" [a20b098a-a81f-4ab2-b611-98c848ad773d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-8rb6t" [a20b098a-a81f-4ab2-b611-98c848ad773d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.004552241s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.29s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-937472 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.27s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-937472 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-937472 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (73.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-937472 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E1226 23:13:02.349672  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/old-k8s-version-797449/client.crt: no such file or directory
E1226 23:13:11.961020  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/addons-154736/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-937472 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m13.168381594s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (73.17s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-2tbp4" [5f1651b3-791e-4dfd-9d09-00fb1fddb0ee] Running
E1226 23:13:27.505676  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/no-preload-596259/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005493447s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-937472 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.3s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-937472 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-zxxpt" [6323458d-aa7c-4f53-9b80-864a3352603f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-zxxpt" [6323458d-aa7c-4f53-9b80-864a3352603f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004994086s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.30s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.35s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-937472 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.35s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.29s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-937472 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.29s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.34s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-937472 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.34s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-937472 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.43s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.44s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-937472 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-t4sjp" [5555f9e8-a5b4-4ff4-adc0-51388598ac22] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-t4sjp" [5555f9e8-a5b4-4ff4-adc0-51388598ac22] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.00635724s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.44s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (99.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-937472 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-937472 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m39.286372657s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (99.29s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-937472 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-937472 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-937472 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (67.91s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-937472 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1226 23:15:05.946624  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/auto-937472/client.crt: no such file or directory
E1226 23:15:05.951846  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/auto-937472/client.crt: no such file or directory
E1226 23:15:05.962122  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/auto-937472/client.crt: no such file or directory
E1226 23:15:05.982417  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/auto-937472/client.crt: no such file or directory
E1226 23:15:06.022668  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/auto-937472/client.crt: no such file or directory
E1226 23:15:06.102957  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/auto-937472/client.crt: no such file or directory
E1226 23:15:06.263593  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/auto-937472/client.crt: no such file or directory
E1226 23:15:06.584176  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/auto-937472/client.crt: no such file or directory
E1226 23:15:07.225131  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/auto-937472/client.crt: no such file or directory
E1226 23:15:08.506095  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/auto-937472/client.crt: no such file or directory
E1226 23:15:11.067058  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/auto-937472/client.crt: no such file or directory
E1226 23:15:16.187355  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/auto-937472/client.crt: no such file or directory
E1226 23:15:26.428010  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/auto-937472/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-937472 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m7.909913784s)
--- PASS: TestNetworkPlugins/group/flannel/Start (67.91s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-937472 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)
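
KubeletFlags is a one-command check: run pgrep -a kubelet over minikube ssh, which prints the kubelet PID plus its full command line, and inspect that line for the expected flags. A minimal Go sketch mirroring the logged command:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // pgrep -a prints "<pid> <full command line>" for the kubelet process.
        out, err := exec.Command("out/minikube-linux-arm64", "ssh",
            "-p", "enable-default-cni-937472", "pgrep -a kubelet").Output()
        if err != nil {
            fmt.Println("ssh failed:", err)
            return
        }
        fmt.Printf("kubelet cmdline: %s", out)
    }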

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.37s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-937472 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-86gw5" [c6cd20ba-d2c5-4553-991c-a0618d16571f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1226 23:15:46.909009  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/auto-937472/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-86gw5" [c6cd20ba-d2c5-4553-991c-a0618d16571f] Running
E1226 23:15:53.321113  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/default-k8s-diff-port-175689/client.crt: no such file or directory
E1226 23:15:53.326376  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/default-k8s-diff-port-175689/client.crt: no such file or directory
E1226 23:15:53.336757  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/default-k8s-diff-port-175689/client.crt: no such file or directory
E1226 23:15:53.357098  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/default-k8s-diff-port-175689/client.crt: no such file or directory
E1226 23:15:53.397408  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/default-k8s-diff-port-175689/client.crt: no such file or directory
E1226 23:15:53.478442  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/default-k8s-diff-port-175689/client.crt: no such file or directory
E1226 23:15:53.639035  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/default-k8s-diff-port-175689/client.crt: no such file or directory
E1226 23:15:53.960178  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/default-k8s-diff-port-175689/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004641444s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.37s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-m8k7r" [d1e07350-9ba7-4506-b3c9-9ff69f90db1e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003930802s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.34s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-937472 "pgrep -a kubelet"
E1226 23:15:54.600605  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/default-k8s-diff-port-175689/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.34s)

TestNetworkPlugins/group/flannel/NetCatPod (11.30s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-937472 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-222kv" [e97f1432-515c-46cf-a981-e16982cb6a84] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1226 23:15:55.880810  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/default-k8s-diff-port-175689/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-222kv" [e97f1432-515c-46cf-a981-e16982cb6a84] Running
E1226 23:16:03.562796  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/default-k8s-diff-port-175689/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004656613s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.30s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.29s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-937472 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.29s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.22s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-937472 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.22s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.25s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-937472 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.25s)

TestNetworkPlugins/group/flannel/DNS (0.28s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-937472 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.28s)

TestNetworkPlugins/group/flannel/Localhost (0.26s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-937472 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.26s)

TestNetworkPlugins/group/flannel/HairPin (0.27s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-937472 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.27s)

TestNetworkPlugins/group/bridge/Start (86.41s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-937472 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1226 23:16:27.869577  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/auto-937472/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-937472 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m26.414507894s)
--- PASS: TestNetworkPlugins/group/bridge/Start (86.41s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.35s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-937472 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.35s)

TestNetworkPlugins/group/bridge/NetCatPod (10.26s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-937472 replace --force -f testdata/netcat-deployment.yaml
E1226 23:17:49.791369  703036 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17857-697646/.minikube/profiles/auto-937472/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-87tv4" [f4b0ac29-7370-499a-9e7e-6656ebee14a6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-87tv4" [f4b0ac29-7370-499a-9e7e-6656ebee14a6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.008081479s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.26s)

TestNetworkPlugins/group/bridge/DNS (0.37s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-937472 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.37s)

TestNetworkPlugins/group/bridge/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-937472 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

TestNetworkPlugins/group/bridge/HairPin (0.23s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-937472 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.23s)

Test skip (32/315)

TestDownloadOnly/v1.16.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)
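
The cached-images and binaries skips in this group all come from the same guard: when a preload tarball already covers the Kubernetes version under test, nothing is cached or extracted separately. A hedged sketch of that gate; the preload path and probe here are purely illustrative, the real check lives in aaa_download_only_test.go:

    package main

    import (
        "os"
        "testing"
    )

    func TestCachedImagesGate(t *testing.T) {
        preloadPath := "/tmp/preloaded-images.tar.lz4" // illustrative path only
        if _, err := os.Stat(preloadPath); err == nil {
            // A preload bundles the images, so there is nothing to assert
            // about per-image caching.
            t.Skip("Preload exists, images won't be cached")
        }
    }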

TestDownloadOnly/v1.16.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestDownloadOnlyKic (0.64s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:225: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-374836 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:237: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-374836" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-374836
--- SKIP: TestDownloadOnlyKic (0.64s)

TestOffline (0s)
=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)
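
TestOffline is gated on architecture and runtime: on arm64 only the docker runtime is supported (see the linked issue), so this crio run skips immediately. A hedged sketch of that gate; the runtime value below stands in for the suite's --container-runtime flag and is not minikube's actual helper:

    package main

    import (
        "runtime"
        "testing"
    )

    func TestOfflineGate(t *testing.T) {
        containerRuntime := "crio" // stand-in for the --container-runtime flag
        if runtime.GOARCH == "arm64" && containerRuntime != "docker" {
            t.Skip("skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144")
        }
    }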

TestAddons/parallel/HelmTiller (0s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:444: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1786: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.19s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-926969" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-926969
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)

TestNetworkPlugins/group/kubenet (5.22s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-937472 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-937472

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-937472

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-937472

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-937472

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-937472

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-937472

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-937472

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-937472

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-937472

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-937472

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937472"

>>> host: /etc/hosts:
* Profile "kubenet-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937472"

>>> host: /etc/resolv.conf:
* Profile "kubenet-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937472"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-937472

>>> host: crictl pods:
* Profile "kubenet-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937472"

>>> host: crictl containers:
* Profile "kubenet-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937472"

>>> k8s: describe netcat deployment:
error: context "kubenet-937472" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-937472" does not exist

>>> k8s: netcat logs:
error: context "kubenet-937472" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-937472" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-937472" does not exist

>>> k8s: coredns logs:
error: context "kubenet-937472" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-937472" does not exist

>>> k8s: api server logs:
error: context "kubenet-937472" does not exist

>>> host: /etc/cni:
* Profile "kubenet-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937472"

>>> host: ip a s:
* Profile "kubenet-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937472"

>>> host: ip r s:
* Profile "kubenet-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937472"

>>> host: iptables-save:
* Profile "kubenet-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937472"

>>> host: iptables table nat:
* Profile "kubenet-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937472"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-937472" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-937472" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-937472" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937472"

>>> host: kubelet daemon config:
* Profile "kubenet-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937472"

>>> k8s: kubelet logs:
* Profile "kubenet-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937472"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937472"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937472"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-937472

>>> host: docker daemon status:
* Profile "kubenet-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937472"

>>> host: docker daemon config:
* Profile "kubenet-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937472"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937472"

>>> host: docker system info:
* Profile "kubenet-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937472"

>>> host: cri-docker daemon status:
* Profile "kubenet-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937472"

>>> host: cri-docker daemon config:
* Profile "kubenet-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937472"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937472"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937472"

>>> host: cri-dockerd version:
* Profile "kubenet-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937472"

>>> host: containerd daemon status:
* Profile "kubenet-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937472"

>>> host: containerd daemon config:
* Profile "kubenet-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937472"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937472"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937472"

>>> host: containerd config dump:
* Profile "kubenet-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937472"

>>> host: crio daemon status:
* Profile "kubenet-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937472"

>>> host: crio daemon config:
* Profile "kubenet-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937472"

>>> host: /etc/crio:
* Profile "kubenet-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937472"

>>> host: crio config:
* Profile "kubenet-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-937472"

----------------------- debugLogs end: kubenet-937472 [took: 5.007565496s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-937472" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-937472
--- SKIP: TestNetworkPlugins/group/kubenet (5.22s)
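
Note that the kubenet group never starts a cluster: net_test.go bails out before minikube start because kubenet is not a CNI plugin and crio requires one, which is why every debugLogs probe above reports a missing kubenet-937472 context or profile. A hedged sketch of that gate (flag plumbing simplified, names illustrative):

    package main

    import "testing"

    // containerRuntime stands in for the suite's --container-runtime flag.
    const containerRuntime = "crio"

    func TestKubenetGate(t *testing.T) {
        // kubenet is not a CNI plugin; crio and containerd both require CNI,
        // so the kubenet network test only makes sense on the docker runtime.
        if containerRuntime != "docker" {
            t.Skip("Skipping the test as crio container runtimes requires CNI")
        }
    }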

TestNetworkPlugins/group/cilium (6.19s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-937472 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-937472

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-937472

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-937472

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-937472

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-937472

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-937472

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-937472

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-937472

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-937472

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-937472

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937472"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937472"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937472"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-937472

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937472"

>>> host: crictl containers:
* Profile "cilium-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937472"

>>> k8s: describe netcat deployment:
error: context "cilium-937472" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-937472" does not exist

>>> k8s: netcat logs:
error: context "cilium-937472" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-937472" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-937472" does not exist

>>> k8s: coredns logs:
error: context "cilium-937472" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-937472" does not exist

>>> k8s: api server logs:
error: context "cilium-937472" does not exist

>>> host: /etc/cni:
* Profile "cilium-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937472"

>>> host: ip a s:
* Profile "cilium-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937472"

>>> host: ip r s:
* Profile "cilium-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937472"

>>> host: iptables-save:
* Profile "cilium-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937472"

>>> host: iptables table nat:
* Profile "cilium-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937472"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-937472

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-937472

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-937472" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-937472" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-937472

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-937472

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-937472" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-937472" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-937472" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-937472" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-937472" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937472"

>>> host: kubelet daemon config:
* Profile "cilium-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937472"

>>> k8s: kubelet logs:
* Profile "cilium-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937472"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937472"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937472"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-937472

>>> host: docker daemon status:
* Profile "cilium-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937472"

>>> host: docker daemon config:
* Profile "cilium-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937472"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937472"

>>> host: docker system info:
* Profile "cilium-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937472"

>>> host: cri-docker daemon status:
* Profile "cilium-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937472"

>>> host: cri-docker daemon config:
* Profile "cilium-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937472"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937472"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937472"

>>> host: cri-dockerd version:
* Profile "cilium-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937472"

>>> host: containerd daemon status:
* Profile "cilium-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937472"

>>> host: containerd daemon config:
* Profile "cilium-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937472"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937472"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937472"

>>> host: containerd config dump:
* Profile "cilium-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937472"

>>> host: crio daemon status:
* Profile "cilium-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937472"

>>> host: crio daemon config:
* Profile "cilium-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937472"

>>> host: /etc/crio:
* Profile "cilium-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937472"

>>> host: crio config:
* Profile "cilium-937472" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-937472"

----------------------- debugLogs end: cilium-937472 [took: 5.925142168s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-937472" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-937472
--- SKIP: TestNetworkPlugins/group/cilium (6.19s)
