Test Report: Docker_Linux_crio_arm64 17953

                    
eb30bbcea83871e91962f38accf20a5558557b42:2024-01-15:32709

Failed tests (3/320)

Order  Failed test  Duration (s)
39 TestAddons/parallel/Ingress 166.59
171 TestIngressAddonLegacy/serial/ValidateIngressAddons 183.29
221 TestMultiNode/serial/PingHostFrom2Pods 4.27
TestAddons/parallel/Ingress (166.59s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-944407 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-944407 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-944407 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [0abf509b-0c38-4776-8f32-f535a8dd73ba] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [0abf509b-0c38-4776-8f32-f535a8dd73ba] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004367409s
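The helpers above poll pod phases directly (helpers_test.go:344) rather than shelling out; a roughly equivalent hand-run check, as a sketch reusing the selector and timeout from the line above:

	kubectl --context addons-944407 -n default wait --for=condition=Ready pod -l run=nginx --timeout=8m0s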
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-944407 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-944407 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.685993433s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
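curl reserves exit status 28 for "operation timed out", and minikube ssh propagates the remote command's status, so the "ssh: Process exited with status 28" in the stderr block above is a curl timeout rather than an SSH transport failure. A minimal reproduction sketch against this profile, assuming the cluster is still up:

	# Re-run the probe with verbose output and a bounded timeout (exit 28 = timed out):
	out/minikube-linux-arm64 -p addons-944407 ssh "curl -sv --max-time 30 -H 'Host: nginx.example.com' http://127.0.0.1/"
	# Check that the ingress controller is actually serving on the node:
	kubectl --context addons-944407 -n ingress-nginx get pods,svc -o wide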
addons_test.go:286: (dbg) Run:  kubectl --context addons-944407 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-944407 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.065603345s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
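The empty stderr plus ";; connection timed out" means the ingress-dns responder at the node IP never answered on port 53. A diagnostic sketch with bounded timeouts (the grep is a loose match; the addon pod's exact name varies across minikube versions):

	dig +time=5 +tries=1 @192.168.49.2 hello-john.test
	kubectl --context addons-944407 -n kube-system get pods -o wide | grep -i ingress-dns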
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p addons-944407 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p addons-944407 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p addons-944407 addons disable ingress --alsologtostderr -v=1: (7.77287017s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-944407
helpers_test.go:235: (dbg) docker inspect addons-944407:

-- stdout --
	[
	    {
	        "Id": "90c026bdaba86d746bd70dec33036acce0cc826277b7b6e40b57304c0debc0ba",
	        "Created": "2024-01-15T10:51:32.250571961Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1631692,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-15T10:51:32.579403788Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20e2d9b56eb2e595fd2b9c5719a0e58f3d7f8c692190d8fde2558cb6a9714f01",
	        "ResolvConfPath": "/var/lib/docker/containers/90c026bdaba86d746bd70dec33036acce0cc826277b7b6e40b57304c0debc0ba/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/90c026bdaba86d746bd70dec33036acce0cc826277b7b6e40b57304c0debc0ba/hostname",
	        "HostsPath": "/var/lib/docker/containers/90c026bdaba86d746bd70dec33036acce0cc826277b7b6e40b57304c0debc0ba/hosts",
	        "LogPath": "/var/lib/docker/containers/90c026bdaba86d746bd70dec33036acce0cc826277b7b6e40b57304c0debc0ba/90c026bdaba86d746bd70dec33036acce0cc826277b7b6e40b57304c0debc0ba-json.log",
	        "Name": "/addons-944407",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-944407:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-944407",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/76d1e5791a4b9f30529fbf5f1d4207d0296e5cac2e2fe2abd84f83cd4c4cc809-init/diff:/var/lib/docker/overlay2/875764cb66056ccf89d3b82171ed27a7d9d817926a8469405b5a9bf1621232cc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/76d1e5791a4b9f30529fbf5f1d4207d0296e5cac2e2fe2abd84f83cd4c4cc809/merged",
	                "UpperDir": "/var/lib/docker/overlay2/76d1e5791a4b9f30529fbf5f1d4207d0296e5cac2e2fe2abd84f83cd4c4cc809/diff",
	                "WorkDir": "/var/lib/docker/overlay2/76d1e5791a4b9f30529fbf5f1d4207d0296e5cac2e2fe2abd84f83cd4c4cc809/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-944407",
	                "Source": "/var/lib/docker/volumes/addons-944407/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-944407",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-944407",
	                "name.minikube.sigs.k8s.io": "addons-944407",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e5f00e5a21857c215c6f16ee98a8cf8204fe9f3289be3be2ce343081a2a16a5e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34719"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34718"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34715"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34717"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34716"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e5f00e5a2185",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-944407": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "90c026bdaba8",
	                        "addons-944407"
	                    ],
	                    "NetworkID": "d9edbfcd6109066506aeb3cef4c40dcb1607adfce5f3f50d3830dc078be0d33a",
	                    "EndpointID": "c977a820656e46a9848a8bc259e9b8707a771eb6af8b3b822bf48e32f8a8aca5",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
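For follow-up queries it is usually easier to pull individual fields with Go templates than to re-dump the whole document; a sketch against the same container, using the same template shape the harness itself applies below for the 22/tcp host port:

	docker inspect addons-944407 --format '{{.State.Status}}'
	docker inspect addons-944407 --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
	docker inspect addons-944407 --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'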
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-944407 -n addons-944407
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-944407 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-944407 logs -n 25: (1.620776339s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-172231                                                                     | download-only-172231   | jenkins | v1.32.0 | 15 Jan 24 10:51 UTC | 15 Jan 24 10:51 UTC |
	| delete  | -p download-only-982144                                                                     | download-only-982144   | jenkins | v1.32.0 | 15 Jan 24 10:51 UTC | 15 Jan 24 10:51 UTC |
	| delete  | -p download-only-492820                                                                     | download-only-492820   | jenkins | v1.32.0 | 15 Jan 24 10:51 UTC | 15 Jan 24 10:51 UTC |
	| start   | --download-only -p                                                                          | download-docker-693615 | jenkins | v1.32.0 | 15 Jan 24 10:51 UTC |                     |
	|         | download-docker-693615                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-693615                                                                   | download-docker-693615 | jenkins | v1.32.0 | 15 Jan 24 10:51 UTC | 15 Jan 24 10:51 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-204729   | jenkins | v1.32.0 | 15 Jan 24 10:51 UTC |                     |
	|         | binary-mirror-204729                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:38859                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-204729                                                                     | binary-mirror-204729   | jenkins | v1.32.0 | 15 Jan 24 10:51 UTC | 15 Jan 24 10:51 UTC |
	| addons  | disable dashboard -p                                                                        | addons-944407          | jenkins | v1.32.0 | 15 Jan 24 10:51 UTC |                     |
	|         | addons-944407                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-944407          | jenkins | v1.32.0 | 15 Jan 24 10:51 UTC |                     |
	|         | addons-944407                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-944407 --wait=true                                                                | addons-944407          | jenkins | v1.32.0 | 15 Jan 24 10:51 UTC | 15 Jan 24 10:53 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker                                                               |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| ip      | addons-944407 ip                                                                            | addons-944407          | jenkins | v1.32.0 | 15 Jan 24 10:54 UTC | 15 Jan 24 10:54 UTC |
	| addons  | addons-944407 addons disable                                                                | addons-944407          | jenkins | v1.32.0 | 15 Jan 24 10:54 UTC | 15 Jan 24 10:54 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-944407 addons                                                                        | addons-944407          | jenkins | v1.32.0 | 15 Jan 24 10:54 UTC | 15 Jan 24 10:54 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-944407          | jenkins | v1.32.0 | 15 Jan 24 10:54 UTC | 15 Jan 24 10:54 UTC |
	|         | addons-944407                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-944407 ssh curl -s                                                                   | addons-944407          | jenkins | v1.32.0 | 15 Jan 24 10:54 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-944407 addons                                                                        | addons-944407          | jenkins | v1.32.0 | 15 Jan 24 10:54 UTC | 15 Jan 24 10:55 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-944407 addons                                                                        | addons-944407          | jenkins | v1.32.0 | 15 Jan 24 10:55 UTC | 15 Jan 24 10:55 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-944407          | jenkins | v1.32.0 | 15 Jan 24 10:55 UTC | 15 Jan 24 10:55 UTC |
	|         | -p addons-944407                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-944407 ssh cat                                                                       | addons-944407          | jenkins | v1.32.0 | 15 Jan 24 10:55 UTC | 15 Jan 24 10:55 UTC |
	|         | /opt/local-path-provisioner/pvc-3f6288d4-f87d-452a-a480-4172734919f2_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-944407 addons disable                                                                | addons-944407          | jenkins | v1.32.0 | 15 Jan 24 10:55 UTC | 15 Jan 24 10:55 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-944407          | jenkins | v1.32.0 | 15 Jan 24 10:55 UTC | 15 Jan 24 10:55 UTC |
	|         | addons-944407                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-944407          | jenkins | v1.32.0 | 15 Jan 24 10:55 UTC | 15 Jan 24 10:55 UTC |
	|         | -p addons-944407                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-944407 ip                                                                            | addons-944407          | jenkins | v1.32.0 | 15 Jan 24 10:56 UTC | 15 Jan 24 10:56 UTC |
	| addons  | addons-944407 addons disable                                                                | addons-944407          | jenkins | v1.32.0 | 15 Jan 24 10:57 UTC | 15 Jan 24 10:57 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-944407 addons disable                                                                | addons-944407          | jenkins | v1.32.0 | 15 Jan 24 10:57 UTC | 15 Jan 24 10:57 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
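	The Audit table above is one section of the "minikube logs" dump; to pull just this section out of a fresh run, something like the following works (a sketch, assuming the section delimiters stay stable):
	
	    out/minikube-linux-arm64 -p addons-944407 logs | sed -n '/==> Audit <==/,/==> Last Start <==/p'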
	
	
	==> Last Start <==
	Log file created at: 2024/01/15 10:51:09
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0115 10:51:09.312904 1631243 out.go:296] Setting OutFile to fd 1 ...
	I0115 10:51:09.313080 1631243 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 10:51:09.313090 1631243 out.go:309] Setting ErrFile to fd 2...
	I0115 10:51:09.313096 1631243 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 10:51:09.313357 1631243 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-1625104/.minikube/bin
	I0115 10:51:09.313830 1631243 out.go:303] Setting JSON to false
	I0115 10:51:09.314705 1631243 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":34411,"bootTime":1705281458,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0115 10:51:09.314780 1631243 start.go:138] virtualization:  
	I0115 10:51:09.317276 1631243 out.go:177] * [addons-944407] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0115 10:51:09.319735 1631243 out.go:177]   - MINIKUBE_LOCATION=17953
	I0115 10:51:09.321658 1631243 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 10:51:09.319882 1631243 notify.go:220] Checking for updates...
	I0115 10:51:09.326181 1631243 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17953-1625104/kubeconfig
	I0115 10:51:09.328023 1631243 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-1625104/.minikube
	I0115 10:51:09.329695 1631243 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0115 10:51:09.331294 1631243 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0115 10:51:09.333204 1631243 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 10:51:09.361490 1631243 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0115 10:51:09.361621 1631243 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 10:51:09.450268 1631243 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2024-01-15 10:51:09.440233903 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0115 10:51:09.450446 1631243 docker.go:295] overlay module found
	I0115 10:51:09.453645 1631243 out.go:177] * Using the docker driver based on user configuration
	I0115 10:51:09.455138 1631243 start.go:298] selected driver: docker
	I0115 10:51:09.455155 1631243 start.go:902] validating driver "docker" against <nil>
	I0115 10:51:09.455169 1631243 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0115 10:51:09.455836 1631243 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 10:51:09.518567 1631243 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2024-01-15 10:51:09.508988802 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0115 10:51:09.518745 1631243 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0115 10:51:09.519001 1631243 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0115 10:51:09.520862 1631243 out.go:177] * Using Docker driver with root privileges
	I0115 10:51:09.522525 1631243 cni.go:84] Creating CNI manager for ""
	I0115 10:51:09.522549 1631243 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0115 10:51:09.522561 1631243 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0115 10:51:09.522576 1631243 start_flags.go:321] config:
	{Name:addons-944407 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-944407 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 10:51:09.525882 1631243 out.go:177] * Starting control plane node addons-944407 in cluster addons-944407
	I0115 10:51:09.527971 1631243 cache.go:121] Beginning downloading kic base image for docker with crio
	I0115 10:51:09.530182 1631243 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0115 10:51:09.531866 1631243 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0115 10:51:09.531915 1631243 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17953-1625104/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I0115 10:51:09.531939 1631243 cache.go:56] Caching tarball of preloaded images
	I0115 10:51:09.532019 1631243 preload.go:174] Found /home/jenkins/minikube-integration/17953-1625104/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0115 10:51:09.532035 1631243 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0115 10:51:09.532370 1631243 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/config.json ...
	I0115 10:51:09.532399 1631243 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/config.json: {Name:mk2eaf8b362e46095d84bdb554cc6eb042210f76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:51:09.532566 1631243 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0115 10:51:09.549758 1631243 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0115 10:51:09.549905 1631243 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0115 10:51:09.549931 1631243 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory, skipping pull
	I0115 10:51:09.549937 1631243 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in cache, skipping pull
	I0115 10:51:09.549948 1631243 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	I0115 10:51:09.549957 1631243 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 from local cache
	I0115 10:51:25.550863 1631243 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 from cached tarball
	I0115 10:51:25.550911 1631243 cache.go:194] Successfully downloaded all kic artifacts
	I0115 10:51:25.550978 1631243 start.go:365] acquiring machines lock for addons-944407: {Name:mk6c57d857390e483da3fbf2c1e58c5d1b4767d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 10:51:25.551112 1631243 start.go:369] acquired machines lock for "addons-944407" in 114.443µs
	I0115 10:51:25.551138 1631243 start.go:93] Provisioning new machine with config: &{Name:addons-944407 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-944407 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0115 10:51:25.551219 1631243 start.go:125] createHost starting for "" (driver="docker")
	I0115 10:51:25.553909 1631243 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0115 10:51:25.554163 1631243 start.go:159] libmachine.API.Create for "addons-944407" (driver="docker")
	I0115 10:51:25.554199 1631243 client.go:168] LocalClient.Create starting
	I0115 10:51:25.554343 1631243 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/ca.pem
	I0115 10:51:25.774929 1631243 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/cert.pem
	I0115 10:51:26.085459 1631243 cli_runner.go:164] Run: docker network inspect addons-944407 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0115 10:51:26.105073 1631243 cli_runner.go:211] docker network inspect addons-944407 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0115 10:51:26.105195 1631243 network_create.go:281] running [docker network inspect addons-944407] to gather additional debugging logs...
	I0115 10:51:26.105222 1631243 cli_runner.go:164] Run: docker network inspect addons-944407
	W0115 10:51:26.123249 1631243 cli_runner.go:211] docker network inspect addons-944407 returned with exit code 1
	I0115 10:51:26.123284 1631243 network_create.go:284] error running [docker network inspect addons-944407]: docker network inspect addons-944407: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-944407 not found
	I0115 10:51:26.123298 1631243 network_create.go:286] output of [docker network inspect addons-944407]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-944407 not found
	
	** /stderr **
	I0115 10:51:26.123393 1631243 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0115 10:51:26.141657 1631243 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40028b9a70}
	I0115 10:51:26.141699 1631243 network_create.go:124] attempt to create docker network addons-944407 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0115 10:51:26.141763 1631243 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-944407 addons-944407
	I0115 10:51:26.212206 1631243 network_create.go:108] docker network addons-944407 192.168.49.0/24 created
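	minikube took 192.168.49.0/24 as the first free private subnet (see network.go:209 above); a quick sketch for verifying the resulting network definition:
	
	    docker network inspect addons-944407 --format '{{range .IPAM.Config}}{{.Subnet}} gw {{.Gateway}}{{end}}'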
	I0115 10:51:26.212245 1631243 kic.go:121] calculated static IP "192.168.49.2" for the "addons-944407" container
	I0115 10:51:26.212331 1631243 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0115 10:51:26.229344 1631243 cli_runner.go:164] Run: docker volume create addons-944407 --label name.minikube.sigs.k8s.io=addons-944407 --label created_by.minikube.sigs.k8s.io=true
	I0115 10:51:26.248562 1631243 oci.go:103] Successfully created a docker volume addons-944407
	I0115 10:51:26.248656 1631243 cli_runner.go:164] Run: docker run --rm --name addons-944407-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-944407 --entrypoint /usr/bin/test -v addons-944407:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0115 10:51:27.949476 1631243 cli_runner.go:217] Completed: docker run --rm --name addons-944407-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-944407 --entrypoint /usr/bin/test -v addons-944407:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib: (1.700770458s)
	I0115 10:51:27.949506 1631243 oci.go:107] Successfully prepared a docker volume addons-944407
	I0115 10:51:27.949534 1631243 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0115 10:51:27.949553 1631243 kic.go:194] Starting extracting preloaded images to volume ...
	I0115 10:51:27.949649 1631243 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17953-1625104/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-944407:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0115 10:51:32.158541 1631243 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17953-1625104/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-944407:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.208851642s)
	I0115 10:51:32.158582 1631243 kic.go:203] duration metric: took 4.209019 seconds to extract preloaded images to volume
	W0115 10:51:32.158725 1631243 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0115 10:51:32.158854 1631243 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0115 10:51:32.234307 1631243 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-944407 --name addons-944407 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-944407 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-944407 --network addons-944407 --ip 192.168.49.2 --volume addons-944407:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0
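	Each --publish=127.0.0.1::PORT in the run command above asks Docker for an ephemeral, loopback-bound host port; "docker port" shows what was assigned, matching the NetworkSettings.Ports block in the inspect output earlier in this report:
	
	    docker port addons-944407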
	I0115 10:51:32.589894 1631243 cli_runner.go:164] Run: docker container inspect addons-944407 --format={{.State.Running}}
	I0115 10:51:32.611438 1631243 cli_runner.go:164] Run: docker container inspect addons-944407 --format={{.State.Status}}
	I0115 10:51:32.632061 1631243 cli_runner.go:164] Run: docker exec addons-944407 stat /var/lib/dpkg/alternatives/iptables
	I0115 10:51:32.707520 1631243 oci.go:144] the created container "addons-944407" has a running status.
	I0115 10:51:32.707550 1631243 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17953-1625104/.minikube/machines/addons-944407/id_rsa...
	I0115 10:51:33.289960 1631243 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17953-1625104/.minikube/machines/addons-944407/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0115 10:51:33.316792 1631243 cli_runner.go:164] Run: docker container inspect addons-944407 --format={{.State.Status}}
	I0115 10:51:33.344594 1631243 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0115 10:51:33.344620 1631243 kic_runner.go:114] Args: [docker exec --privileged addons-944407 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0115 10:51:33.418111 1631243 cli_runner.go:164] Run: docker container inspect addons-944407 --format={{.State.Status}}
	I0115 10:51:33.446426 1631243 machine.go:88] provisioning docker machine ...
	I0115 10:51:33.446466 1631243 ubuntu.go:169] provisioning hostname "addons-944407"
	I0115 10:51:33.446530 1631243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-944407
	I0115 10:51:33.468889 1631243 main.go:141] libmachine: Using SSH client type: native
	I0115 10:51:33.469313 1631243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfbd0] 0x3c2340 <nil>  [] 0s} 127.0.0.1 34719 <nil> <nil>}
	I0115 10:51:33.469331 1631243 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-944407 && echo "addons-944407" | sudo tee /etc/hostname
	I0115 10:51:33.678605 1631243 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-944407
	
	I0115 10:51:33.678709 1631243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-944407
	I0115 10:51:33.699856 1631243 main.go:141] libmachine: Using SSH client type: native
	I0115 10:51:33.700268 1631243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfbd0] 0x3c2340 <nil>  [] 0s} 127.0.0.1 34719 <nil> <nil>}
	I0115 10:51:33.700296 1631243 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-944407' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-944407/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-944407' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0115 10:51:33.848006 1631243 main.go:141] libmachine: SSH cmd err, output: <nil>: 
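
Both SSH commands above ran over minikube's native (in-process Go) SSH client against the forwarded port 34719. For orientation, a rough standalone equivalent using golang.org/x/crypto/ssh, with the key path and hostname command taken from this log; host-key checking is skipped only because the endpoint is a local kic container:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/addons-944407/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local kic container only
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:34719", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(`sudo hostname addons-944407 && echo "addons-944407" | sudo tee /etc/hostname`)
	fmt.Printf("out=%s err=%v\n", out, err)
}
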
	I0115 10:51:33.848035 1631243 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17953-1625104/.minikube CaCertPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17953-1625104/.minikube}
	I0115 10:51:33.848059 1631243 ubuntu.go:177] setting up certificates
	I0115 10:51:33.848069 1631243 provision.go:83] configureAuth start
	I0115 10:51:33.848132 1631243 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-944407
	I0115 10:51:33.867798 1631243 provision.go:138] copyHostCerts
	I0115 10:51:33.867884 1631243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17953-1625104/.minikube/ca.pem (1082 bytes)
	I0115 10:51:33.867985 1631243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17953-1625104/.minikube/cert.pem (1123 bytes)
	I0115 10:51:33.868045 1631243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17953-1625104/.minikube/key.pem (1675 bytes)
	I0115 10:51:33.868086 1631243 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17953-1625104/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17953-1625104/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17953-1625104/.minikube/certs/ca-key.pem org=jenkins.addons-944407 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-944407]
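
The san=[...] list above ends up as the server certificate's IP and DNS subject alternative names. A self-contained sketch of minting such a certificate with the same SANs using only Go's standard library; unlike the real provisioner, which signs with ca-key.pem, this version self-signs:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key and template only; the real flow signs with the CA key pair.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-944407"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs copied from the san=[...] list in the log line above.
		DNSNames:    []string{"localhost", "minikube", "addons-944407"},
		IPAddresses: []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
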
	I0115 10:51:34.448563 1631243 provision.go:172] copyRemoteCerts
	I0115 10:51:34.448650 1631243 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0115 10:51:34.448694 1631243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-944407
	I0115 10:51:34.469581 1631243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34719 SSHKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/addons-944407/id_rsa Username:docker}
	I0115 10:51:34.569751 1631243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0115 10:51:34.600866 1631243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-1625104/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0115 10:51:34.628950 1631243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-1625104/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0115 10:51:34.656915 1631243 provision.go:86] duration metric: configureAuth took 808.832503ms
	I0115 10:51:34.656941 1631243 ubuntu.go:193] setting minikube options for container-runtime
	I0115 10:51:34.657139 1631243 config.go:182] Loaded profile config "addons-944407": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 10:51:34.657251 1631243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-944407
	I0115 10:51:34.676096 1631243 main.go:141] libmachine: Using SSH client type: native
	I0115 10:51:34.676529 1631243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfbd0] 0x3c2340 <nil>  [] 0s} 127.0.0.1 34719 <nil> <nil>}
	I0115 10:51:34.676550 1631243 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0115 10:51:34.930348 1631243 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0115 10:51:34.930372 1631243 machine.go:91] provisioned docker machine in 1.483924752s
	I0115 10:51:34.930395 1631243 client.go:171] LocalClient.Create took 9.376173557s
	I0115 10:51:34.930411 1631243 start.go:167] duration metric: libmachine.API.Create for "addons-944407" took 9.376250002s
	I0115 10:51:34.930418 1631243 start.go:300] post-start starting for "addons-944407" (driver="docker")
	I0115 10:51:34.930428 1631243 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0115 10:51:34.930499 1631243 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0115 10:51:34.930545 1631243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-944407
	I0115 10:51:34.949540 1631243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34719 SSHKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/addons-944407/id_rsa Username:docker}
	I0115 10:51:35.049699 1631243 ssh_runner.go:195] Run: cat /etc/os-release
	I0115 10:51:35.054000 1631243 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0115 10:51:35.054041 1631243 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0115 10:51:35.054054 1631243 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0115 10:51:35.054061 1631243 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0115 10:51:35.054072 1631243 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-1625104/.minikube/addons for local assets ...
	I0115 10:51:35.054138 1631243 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-1625104/.minikube/files for local assets ...
	I0115 10:51:35.054167 1631243 start.go:303] post-start completed in 123.743724ms
	I0115 10:51:35.054528 1631243 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-944407
	I0115 10:51:35.073026 1631243 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/config.json ...
	I0115 10:51:35.073324 1631243 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 10:51:35.073383 1631243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-944407
	I0115 10:51:35.092563 1631243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34719 SSHKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/addons-944407/id_rsa Username:docker}
	I0115 10:51:35.188723 1631243 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0115 10:51:35.194617 1631243 start.go:128] duration metric: createHost completed in 9.643382537s
	I0115 10:51:35.194644 1631243 start.go:83] releasing machines lock for "addons-944407", held for 9.643523407s
	I0115 10:51:35.194722 1631243 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-944407
	I0115 10:51:35.215864 1631243 ssh_runner.go:195] Run: cat /version.json
	I0115 10:51:35.215924 1631243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-944407
	I0115 10:51:35.216198 1631243 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0115 10:51:35.216256 1631243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-944407
	I0115 10:51:35.239840 1631243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34719 SSHKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/addons-944407/id_rsa Username:docker}
	I0115 10:51:35.242355 1631243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34719 SSHKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/addons-944407/id_rsa Username:docker}
	I0115 10:51:35.470135 1631243 ssh_runner.go:195] Run: systemctl --version
	I0115 10:51:35.475740 1631243 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0115 10:51:35.623342 1631243 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0115 10:51:35.629004 1631243 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0115 10:51:35.651590 1631243 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0115 10:51:35.651679 1631243 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0115 10:51:35.690683 1631243 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
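
The two find/-exec mv commands above park the stock loopback, bridge, and podman CNI configs under a .mk_disabled suffix so they cannot conflict with the CNI minikube installs later (kindnet, per the lines further down). The same effect sketched in Go, reusing the glob patterns from this log:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	patterns := []string{
		"/etc/cni/net.d/*loopback.conf*",
		"/etc/cni/net.d/*bridge*",
		"/etc/cni/net.d/*podman*",
	}
	for _, pat := range patterns {
		matches, err := filepath.Glob(pat)
		if err != nil {
			continue // Glob only errors on a malformed pattern
		}
		for _, m := range matches {
			if filepath.Ext(m) == ".mk_disabled" {
				continue // already parked on a previous run
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				fmt.Println("disable failed:", err)
			} else {
				fmt.Println("disabled", m)
			}
		}
	}
}
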
	I0115 10:51:35.690708 1631243 start.go:475] detecting cgroup driver to use...
	I0115 10:51:35.690746 1631243 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0115 10:51:35.690800 1631243 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0115 10:51:35.710508 1631243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0115 10:51:35.725235 1631243 docker.go:217] disabling cri-docker service (if available) ...
	I0115 10:51:35.725350 1631243 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0115 10:51:35.741398 1631243 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0115 10:51:35.758612 1631243 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0115 10:51:35.865971 1631243 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0115 10:51:35.966612 1631243 docker.go:233] disabling docker service ...
	I0115 10:51:35.966717 1631243 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0115 10:51:35.988329 1631243 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0115 10:51:36.003742 1631243 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0115 10:51:36.113465 1631243 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0115 10:51:36.222251 1631243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0115 10:51:36.235943 1631243 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0115 10:51:36.259369 1631243 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0115 10:51:36.259440 1631243 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:51:36.272010 1631243 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0115 10:51:36.272149 1631243 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:51:36.284564 1631243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 10:51:36.296627 1631243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
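
After the three sed edits above, the 02-crio.conf drop-in should contain, approximately (reconstructed from the commands themselves, with section placement per CRI-O's documented schema, not captured from the host):

[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
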
	I0115 10:51:36.308217 1631243 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0115 10:51:36.318808 1631243 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0115 10:51:36.329290 1631243 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0115 10:51:36.339689 1631243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 10:51:36.433328 1631243 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0115 10:51:36.553443 1631243 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0115 10:51:36.553589 1631243 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0115 10:51:36.558460 1631243 start.go:543] Will wait 60s for crictl version
	I0115 10:51:36.558571 1631243 ssh_runner.go:195] Run: which crictl
	I0115 10:51:36.563014 1631243 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0115 10:51:36.605133 1631243 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0115 10:51:36.605269 1631243 ssh_runner.go:195] Run: crio --version
	I0115 10:51:36.650578 1631243 ssh_runner.go:195] Run: crio --version
	I0115 10:51:36.698189 1631243 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0115 10:51:36.699767 1631243 cli_runner.go:164] Run: docker network inspect addons-944407 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0115 10:51:36.718449 1631243 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0115 10:51:36.723259 1631243 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
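
The bash one-liner above is an idempotent upsert: filter out any stale host.minikube.internal line, append the fresh mapping, then copy the temp file over /etc/hosts. The same logic sketched in Go, writing to a sibling file instead of touching the real /etc/hosts:

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost drops any line ending in "\t<name>" and appends "ip\tname",
// mirroring the grep -v / echo / cp pipeline in the log line above.
func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	// Write a sibling file; the real flow copies it over /etc/hosts with sudo.
	return os.WriteFile(path+".new", []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := upsertHost("/etc/hosts", "192.168.49.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
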
	I0115 10:51:36.737318 1631243 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0115 10:51:36.737389 1631243 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 10:51:36.806703 1631243 crio.go:496] all images are preloaded for cri-o runtime.
	I0115 10:51:36.806729 1631243 crio.go:415] Images already preloaded, skipping extraction
	I0115 10:51:36.806783 1631243 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 10:51:36.851629 1631243 crio.go:496] all images are preloaded for cri-o runtime.
	I0115 10:51:36.851691 1631243 cache_images.go:84] Images are preloaded, skipping loading
	I0115 10:51:36.851772 1631243 ssh_runner.go:195] Run: crio config
	I0115 10:51:36.906716 1631243 cni.go:84] Creating CNI manager for ""
	I0115 10:51:36.906738 1631243 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0115 10:51:36.906769 1631243 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0115 10:51:36.906789 1631243 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-944407 NodeName:addons-944407 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0115 10:51:36.906937 1631243 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-944407"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0115 10:51:36.907001 1631243 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-944407 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-944407 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0115 10:51:36.907070 1631243 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0115 10:51:36.917823 1631243 binaries.go:44] Found k8s binaries, skipping transfer
	I0115 10:51:36.917920 1631243 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0115 10:51:36.928533 1631243 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I0115 10:51:36.949516 1631243 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0115 10:51:36.970822 1631243 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I0115 10:51:36.991692 1631243 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0115 10:51:36.996124 1631243 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 10:51:37.012459 1631243 certs.go:56] Setting up /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407 for IP: 192.168.49.2
	I0115 10:51:37.012508 1631243 certs.go:190] acquiring lock for shared ca certs: {Name:mk2a63925baba8534769a012921a3873667cd449 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:51:37.012653 1631243 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17953-1625104/.minikube/ca.key
	I0115 10:51:37.674791 1631243 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17953-1625104/.minikube/ca.crt ...
	I0115 10:51:37.674827 1631243 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-1625104/.minikube/ca.crt: {Name:mkc1f586a5aec9323f3bbb34da9317547ed7ea92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:51:37.675053 1631243 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17953-1625104/.minikube/ca.key ...
	I0115 10:51:37.675070 1631243 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-1625104/.minikube/ca.key: {Name:mk638b67e9768c8be1d47ad9504eb9204202d0ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:51:37.675780 1631243 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17953-1625104/.minikube/proxy-client-ca.key
	I0115 10:51:38.034833 1631243 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17953-1625104/.minikube/proxy-client-ca.crt ...
	I0115 10:51:38.034865 1631243 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-1625104/.minikube/proxy-client-ca.crt: {Name:mk2fb708cce9e877b1070d238228b9c2cace90fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:51:38.035063 1631243 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17953-1625104/.minikube/proxy-client-ca.key ...
	I0115 10:51:38.035076 1631243 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-1625104/.minikube/proxy-client-ca.key: {Name:mk4364edd5d401d6b3e0af5ccfbb5807f28fdc90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:51:38.035234 1631243 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/client.key
	I0115 10:51:38.035254 1631243 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/client.crt with IP's: []
	I0115 10:51:38.658120 1631243 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/client.crt ...
	I0115 10:51:38.658150 1631243 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/client.crt: {Name:mkfa888a23f172eb4918b1834321cea0e7a210ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:51:38.658938 1631243 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/client.key ...
	I0115 10:51:38.658954 1631243 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/client.key: {Name:mka9e6bb548419f616dea83a2a47d856beaab633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:51:38.659586 1631243 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/apiserver.key.dd3b5fb2
	I0115 10:51:38.659608 1631243 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0115 10:51:39.530986 1631243 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/apiserver.crt.dd3b5fb2 ...
	I0115 10:51:39.531017 1631243 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/apiserver.crt.dd3b5fb2: {Name:mk3873e78fdd333ddf3d26d8ea69cba65d42dd47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:51:39.531740 1631243 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/apiserver.key.dd3b5fb2 ...
	I0115 10:51:39.531760 1631243 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/apiserver.key.dd3b5fb2: {Name:mka1c9ba56102975bf9bbdb141a2493bf45bd0d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:51:39.531854 1631243 certs.go:337] copying /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/apiserver.crt
	I0115 10:51:39.531939 1631243 certs.go:341] copying /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/apiserver.key
	I0115 10:51:39.531999 1631243 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/proxy-client.key
	I0115 10:51:39.532027 1631243 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/proxy-client.crt with IP's: []
	I0115 10:51:40.688164 1631243 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/proxy-client.crt ...
	I0115 10:51:40.688194 1631243 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/proxy-client.crt: {Name:mk4b2f5d1312d10a8d1524e61873ae52c4976d72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:51:40.688942 1631243 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/proxy-client.key ...
	I0115 10:51:40.688959 1631243 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/proxy-client.key: {Name:mk59459fa1616cacf47ccf4b745af7dc4513d945 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:51:40.689728 1631243 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/home/jenkins/minikube-integration/17953-1625104/.minikube/certs/ca-key.pem (1679 bytes)
	I0115 10:51:40.689775 1631243 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/home/jenkins/minikube-integration/17953-1625104/.minikube/certs/ca.pem (1082 bytes)
	I0115 10:51:40.689814 1631243 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/home/jenkins/minikube-integration/17953-1625104/.minikube/certs/cert.pem (1123 bytes)
	I0115 10:51:40.689849 1631243 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/home/jenkins/minikube-integration/17953-1625104/.minikube/certs/key.pem (1675 bytes)
	I0115 10:51:40.690485 1631243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0115 10:51:40.720531 1631243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0115 10:51:40.749943 1631243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0115 10:51:40.778405 1631243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0115 10:51:40.806792 1631243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-1625104/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0115 10:51:40.834605 1631243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-1625104/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0115 10:51:40.862045 1631243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-1625104/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0115 10:51:40.890035 1631243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-1625104/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0115 10:51:40.917640 1631243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-1625104/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0115 10:51:40.945220 1631243 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0115 10:51:40.965613 1631243 ssh_runner.go:195] Run: openssl version
	I0115 10:51:40.973653 1631243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0115 10:51:40.985438 1631243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:51:40.990099 1631243 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 15 10:51 /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:51:40.990166 1631243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0115 10:51:40.998912 1631243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0115 10:51:41.011682 1631243 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0115 10:51:41.016346 1631243 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0115 10:51:41.016397 1631243 kubeadm.go:404] StartCluster: {Name:addons-944407 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-944407 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 10:51:41.016514 1631243 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0115 10:51:41.016603 1631243 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 10:51:41.060050 1631243 cri.go:89] found id: ""
	I0115 10:51:41.060121 1631243 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0115 10:51:41.071591 1631243 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0115 10:51:41.082260 1631243 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0115 10:51:41.082350 1631243 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0115 10:51:41.093116 1631243 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0115 10:51:41.093202 1631243 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0115 10:51:41.149193 1631243 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0115 10:51:41.149470 1631243 kubeadm.go:322] [preflight] Running pre-flight checks
	I0115 10:51:41.202851 1631243 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0115 10:51:41.202922 1631243 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-aws
	I0115 10:51:41.202958 1631243 kubeadm.go:322] OS: Linux
	I0115 10:51:41.203006 1631243 kubeadm.go:322] CGROUPS_CPU: enabled
	I0115 10:51:41.203054 1631243 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0115 10:51:41.203102 1631243 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0115 10:51:41.203154 1631243 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0115 10:51:41.203204 1631243 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0115 10:51:41.203259 1631243 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0115 10:51:41.203305 1631243 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0115 10:51:41.203353 1631243 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0115 10:51:41.203402 1631243 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0115 10:51:41.286310 1631243 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0115 10:51:41.286432 1631243 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0115 10:51:41.286547 1631243 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0115 10:51:41.547732 1631243 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0115 10:51:41.551492 1631243 out.go:204]   - Generating certificates and keys ...
	I0115 10:51:41.551678 1631243 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0115 10:51:41.551769 1631243 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0115 10:51:42.065758 1631243 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0115 10:51:42.543339 1631243 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0115 10:51:43.068653 1631243 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0115 10:51:43.332280 1631243 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0115 10:51:44.669578 1631243 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0115 10:51:44.669920 1631243 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-944407 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0115 10:51:45.113672 1631243 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0115 10:51:45.114070 1631243 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-944407 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0115 10:51:45.439714 1631243 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0115 10:51:46.034313 1631243 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0115 10:51:46.577601 1631243 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0115 10:51:46.577936 1631243 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0115 10:51:47.472099 1631243 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0115 10:51:47.853095 1631243 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0115 10:51:48.084870 1631243 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0115 10:51:48.661511 1631243 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0115 10:51:48.662576 1631243 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0115 10:51:48.665722 1631243 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0115 10:51:48.668117 1631243 out.go:204]   - Booting up control plane ...
	I0115 10:51:48.668217 1631243 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0115 10:51:48.668295 1631243 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0115 10:51:48.669006 1631243 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0115 10:51:48.679430 1631243 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0115 10:51:48.680399 1631243 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0115 10:51:48.680648 1631243 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0115 10:51:48.790719 1631243 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0115 10:51:55.793243 1631243 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.003266 seconds
	I0115 10:51:55.793362 1631243 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0115 10:51:55.806407 1631243 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0115 10:51:56.328701 1631243 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0115 10:51:56.328893 1631243 kubeadm.go:322] [mark-control-plane] Marking the node addons-944407 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0115 10:51:56.840408 1631243 kubeadm.go:322] [bootstrap-token] Using token: 1lzua0.0p7zk6h3ra0hooy4
	I0115 10:51:56.842594 1631243 out.go:204]   - Configuring RBAC rules ...
	I0115 10:51:56.842729 1631243 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0115 10:51:56.848887 1631243 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0115 10:51:56.856893 1631243 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0115 10:51:56.860818 1631243 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0115 10:51:56.865329 1631243 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0115 10:51:56.869075 1631243 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0115 10:51:56.883157 1631243 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0115 10:51:57.127105 1631243 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0115 10:51:57.276586 1631243 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0115 10:51:57.277817 1631243 kubeadm.go:322] 
	I0115 10:51:57.277888 1631243 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0115 10:51:57.277900 1631243 kubeadm.go:322] 
	I0115 10:51:57.277973 1631243 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0115 10:51:57.277981 1631243 kubeadm.go:322] 
	I0115 10:51:57.278005 1631243 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0115 10:51:57.278064 1631243 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0115 10:51:57.278114 1631243 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0115 10:51:57.278121 1631243 kubeadm.go:322] 
	I0115 10:51:57.278172 1631243 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0115 10:51:57.278189 1631243 kubeadm.go:322] 
	I0115 10:51:57.278234 1631243 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0115 10:51:57.278243 1631243 kubeadm.go:322] 
	I0115 10:51:57.278302 1631243 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0115 10:51:57.278375 1631243 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0115 10:51:57.278451 1631243 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0115 10:51:57.278460 1631243 kubeadm.go:322] 
	I0115 10:51:57.278538 1631243 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0115 10:51:57.278612 1631243 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0115 10:51:57.278620 1631243 kubeadm.go:322] 
	I0115 10:51:57.278698 1631243 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 1lzua0.0p7zk6h3ra0hooy4 \
	I0115 10:51:57.278797 1631243 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:9fc86a3add6326d4608da878bd8e422e94962742c71a62ee80a4f994be1f8a81 \
	I0115 10:51:57.278820 1631243 kubeadm.go:322] 	--control-plane 
	I0115 10:51:57.278829 1631243 kubeadm.go:322] 
	I0115 10:51:57.278907 1631243 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0115 10:51:57.278916 1631243 kubeadm.go:322] 
	I0115 10:51:57.278992 1631243 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 1lzua0.0p7zk6h3ra0hooy4 \
	I0115 10:51:57.279089 1631243 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:9fc86a3add6326d4608da878bd8e422e94962742c71a62ee80a4f994be1f8a81 
	I0115 10:51:57.283140 1631243 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I0115 10:51:57.283301 1631243 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0115 10:51:57.283339 1631243 cni.go:84] Creating CNI manager for ""
	I0115 10:51:57.283353 1631243 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0115 10:51:57.287342 1631243 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0115 10:51:57.289430 1631243 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0115 10:51:57.306977 1631243 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0115 10:51:57.307002 1631243 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0115 10:51:57.342185 1631243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0115 10:51:58.213973 1631243 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0115 10:51:58.214107 1631243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:51:58.214183 1631243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=49acfca761ba3cce5d2bedb7b4a0191c7f924d23 minikube.k8s.io/name=addons-944407 minikube.k8s.io/updated_at=2024_01_15T10_51_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:51:58.376612 1631243 ops.go:34] apiserver oom_adj: -16
	I0115 10:51:58.376701 1631243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:51:58.876877 1631243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:51:59.377794 1631243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:51:59.877094 1631243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:52:00.377464 1631243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:52:00.876842 1631243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:52:01.376999 1631243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:52:01.876895 1631243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:52:02.377722 1631243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:52:02.877713 1631243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:52:03.377550 1631243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:52:03.877468 1631243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:52:04.377427 1631243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:52:04.877471 1631243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:52:05.377482 1631243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:52:05.877516 1631243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:52:06.377553 1631243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:52:06.876919 1631243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:52:07.377070 1631243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:52:07.877543 1631243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:52:08.377395 1631243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:52:08.877717 1631243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:52:09.377358 1631243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:52:09.877309 1631243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 10:52:09.990547 1631243 kubeadm.go:1088] duration metric: took 11.776486231s to wait for elevateKubeSystemPrivileges.
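
The run of identical "kubectl get sa default" invocations above, roughly 500ms apart, is a readiness poll: kubeadm has returned, but the cluster only becomes usable once the controller-manager has created the "default" ServiceAccount. A stripped-down version of that wait, using the binary and kubeconfig paths from this log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.28.4/kubectl"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		err := exec.Command(kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
		if err == nil {
			fmt.Println("default ServiceAccount exists; cluster is usable")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence above
	}
	fmt.Println("timed out waiting for the default ServiceAccount")
}
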
	I0115 10:52:09.990577 1631243 kubeadm.go:406] StartCluster complete in 28.974184879s
	I0115 10:52:09.990594 1631243 settings.go:142] acquiring lock: {Name:mk05555b5306114ae6571475ccb387a5354ea318 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:52:09.991315 1631243 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17953-1625104/kubeconfig
	I0115 10:52:09.991719 1631243 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-1625104/kubeconfig: {Name:mk8fd98ab18475cc98d08290957f6662a0acdd86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:52:09.991925 1631243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0115 10:52:09.992202 1631243 config.go:182] Loaded profile config "addons-944407": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 10:52:09.992339 1631243 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0115 10:52:09.992422 1631243 addons.go:69] Setting yakd=true in profile "addons-944407"
	I0115 10:52:09.992436 1631243 addons.go:234] Setting addon yakd=true in "addons-944407"
	I0115 10:52:09.992492 1631243 host.go:66] Checking if "addons-944407" exists ...
	I0115 10:52:09.992946 1631243 cli_runner.go:164] Run: docker container inspect addons-944407 --format={{.State.Status}}
	I0115 10:52:09.994466 1631243 addons.go:69] Setting inspektor-gadget=true in profile "addons-944407"
	I0115 10:52:09.994486 1631243 addons.go:234] Setting addon inspektor-gadget=true in "addons-944407"
	I0115 10:52:09.994530 1631243 host.go:66] Checking if "addons-944407" exists ...
	I0115 10:52:09.994983 1631243 cli_runner.go:164] Run: docker container inspect addons-944407 --format={{.State.Status}}
	I0115 10:52:10.002026 1631243 addons.go:69] Setting metrics-server=true in profile "addons-944407"
	I0115 10:52:10.002071 1631243 addons.go:234] Setting addon metrics-server=true in "addons-944407"
	I0115 10:52:10.002126 1631243 host.go:66] Checking if "addons-944407" exists ...
	I0115 10:52:10.002309 1631243 addons.go:69] Setting cloud-spanner=true in profile "addons-944407"
	I0115 10:52:10.002321 1631243 addons.go:234] Setting addon cloud-spanner=true in "addons-944407"
	I0115 10:52:10.002348 1631243 host.go:66] Checking if "addons-944407" exists ...
	I0115 10:52:10.002750 1631243 cli_runner.go:164] Run: docker container inspect addons-944407 --format={{.State.Status}}
	I0115 10:52:10.003229 1631243 cli_runner.go:164] Run: docker container inspect addons-944407 --format={{.State.Status}}
	I0115 10:52:10.003815 1631243 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-944407"
	I0115 10:52:10.003846 1631243 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-944407"
	I0115 10:52:10.003903 1631243 host.go:66] Checking if "addons-944407" exists ...
	I0115 10:52:10.004338 1631243 cli_runner.go:164] Run: docker container inspect addons-944407 --format={{.State.Status}}
	I0115 10:52:10.010574 1631243 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-944407"
	I0115 10:52:10.010709 1631243 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-944407"
	I0115 10:52:10.010795 1631243 host.go:66] Checking if "addons-944407" exists ...
	I0115 10:52:10.011322 1631243 cli_runner.go:164] Run: docker container inspect addons-944407 --format={{.State.Status}}
	I0115 10:52:10.022771 1631243 addons.go:69] Setting default-storageclass=true in profile "addons-944407"
	I0115 10:52:10.022883 1631243 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-944407"
	I0115 10:52:10.023304 1631243 cli_runner.go:164] Run: docker container inspect addons-944407 --format={{.State.Status}}
	I0115 10:52:10.030617 1631243 addons.go:69] Setting registry=true in profile "addons-944407"
	I0115 10:52:10.030708 1631243 addons.go:234] Setting addon registry=true in "addons-944407"
	I0115 10:52:10.030789 1631243 host.go:66] Checking if "addons-944407" exists ...
	I0115 10:52:10.031292 1631243 cli_runner.go:164] Run: docker container inspect addons-944407 --format={{.State.Status}}
	I0115 10:52:10.035152 1631243 addons.go:69] Setting gcp-auth=true in profile "addons-944407"
	I0115 10:52:10.035244 1631243 mustload.go:65] Loading cluster: addons-944407
	I0115 10:52:10.035521 1631243 config.go:182] Loaded profile config "addons-944407": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 10:52:10.035908 1631243 cli_runner.go:164] Run: docker container inspect addons-944407 --format={{.State.Status}}
	I0115 10:52:10.050374 1631243 addons.go:69] Setting storage-provisioner=true in profile "addons-944407"
	I0115 10:52:10.050551 1631243 addons.go:234] Setting addon storage-provisioner=true in "addons-944407"
	I0115 10:52:10.050632 1631243 host.go:66] Checking if "addons-944407" exists ...
	I0115 10:52:10.051136 1631243 cli_runner.go:164] Run: docker container inspect addons-944407 --format={{.State.Status}}
	I0115 10:52:10.055349 1631243 addons.go:69] Setting ingress=true in profile "addons-944407"
	I0115 10:52:10.055383 1631243 addons.go:234] Setting addon ingress=true in "addons-944407"
	I0115 10:52:10.055444 1631243 host.go:66] Checking if "addons-944407" exists ...
	I0115 10:52:10.055897 1631243 cli_runner.go:164] Run: docker container inspect addons-944407 --format={{.State.Status}}
	I0115 10:52:10.066391 1631243 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-944407"
	I0115 10:52:10.066482 1631243 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-944407"
	I0115 10:52:10.066861 1631243 cli_runner.go:164] Run: docker container inspect addons-944407 --format={{.State.Status}}
	I0115 10:52:10.081389 1631243 addons.go:69] Setting ingress-dns=true in profile "addons-944407"
	I0115 10:52:10.081424 1631243 addons.go:234] Setting addon ingress-dns=true in "addons-944407"
	I0115 10:52:10.081485 1631243 host.go:66] Checking if "addons-944407" exists ...
	I0115 10:52:10.081937 1631243 cli_runner.go:164] Run: docker container inspect addons-944407 --format={{.State.Status}}
	I0115 10:52:10.084273 1631243 addons.go:69] Setting volumesnapshots=true in profile "addons-944407"
	I0115 10:52:10.084352 1631243 addons.go:234] Setting addon volumesnapshots=true in "addons-944407"
	I0115 10:52:10.084431 1631243 host.go:66] Checking if "addons-944407" exists ...
	I0115 10:52:10.084898 1631243 cli_runner.go:164] Run: docker container inspect addons-944407 --format={{.State.Status}}
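Each addon enabled in this profile follows the same three-step pattern seen above: an addons.go:69 entry records "Setting <name>=true" in the profile, host.go:66 checks that the "addons-944407" machine exists, and cli_runner reads the container state. The state probe is the exact docker command in the log; run by hand (quoted for the shell) it would be:

	docker container inspect addons-944407 --format '{{.State.Status}}'   # expect: running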
	I0115 10:52:10.219390 1631243 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0115 10:52:10.219714 1631243 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.13
	I0115 10:52:10.225941 1631243 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0115 10:52:10.226016 1631243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0115 10:52:10.226134 1631243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-944407
	I0115 10:52:10.237964 1631243 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I0115 10:52:10.239776 1631243 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0115 10:52:10.239800 1631243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0115 10:52:10.239867 1631243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-944407
	I0115 10:52:10.251503 1631243 addons.go:234] Setting addon default-storageclass=true in "addons-944407"
	I0115 10:52:10.251922 1631243 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0115 10:52:10.254112 1631243 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0115 10:52:10.254180 1631243 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0115 10:52:10.254185 1631243 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0115 10:52:10.254190 1631243 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I0115 10:52:10.254956 1631243 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0115 10:52:10.255088 1631243 host.go:66] Checking if "addons-944407" exists ...
	I0115 10:52:10.256330 1631243 out.go:177]   - Using image docker.io/registry:2.8.3
	I0115 10:52:10.260151 1631243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0115 10:52:10.260225 1631243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-944407
	I0115 10:52:10.260456 1631243 cli_runner.go:164] Run: docker container inspect addons-944407 --format={{.State.Status}}
	I0115 10:52:10.273841 1631243 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0115 10:52:10.273857 1631243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0115 10:52:10.273918 1631243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-944407
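Each addons.go:426 "installing" line pairs with an ssh_runner "scp memory" line: the manifest is rendered in memory and copied into /etc/kubernetes/addons inside the node over SSH (the cli_runner call resolves the host port mapped to 22/tcp). The apply happens later with the node-local kubectl binary; for the ingress-dns manifest staged here, that is the Run logged at 10:52:11.065:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml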
	I0115 10:52:10.295473 1631243 host.go:66] Checking if "addons-944407" exists ...
	I0115 10:52:10.302510 1631243 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0115 10:52:10.302632 1631243 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0115 10:52:10.304697 1631243 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0115 10:52:10.306605 1631243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0115 10:52:10.308122 1631243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-944407
	I0115 10:52:10.308336 1631243 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0115 10:52:10.308349 1631243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0115 10:52:10.308403 1631243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-944407
	I0115 10:52:10.329522 1631243 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0115 10:52:10.332562 1631243 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0115 10:52:10.334747 1631243 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0115 10:52:10.336563 1631243 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0115 10:52:10.336760 1631243 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0115 10:52:10.338547 1631243 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0115 10:52:10.340825 1631243 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0115 10:52:10.340843 1631243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0115 10:52:10.340912 1631243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-944407
	I0115 10:52:10.348543 1631243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34719 SSHKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/addons-944407/id_rsa Username:docker}
	I0115 10:52:10.350160 1631243 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-944407"
	I0115 10:52:10.350206 1631243 host.go:66] Checking if "addons-944407" exists ...
	I0115 10:52:10.352446 1631243 cli_runner.go:164] Run: docker container inspect addons-944407 --format={{.State.Status}}
	I0115 10:52:10.369282 1631243 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0115 10:52:10.374035 1631243 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.5
	I0115 10:52:10.374138 1631243 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0115 10:52:10.377811 1631243 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0115 10:52:10.377844 1631243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0115 10:52:10.377924 1631243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-944407
	I0115 10:52:10.380629 1631243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0115 10:52:10.380714 1631243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-944407
	I0115 10:52:10.404161 1631243 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0115 10:52:10.406301 1631243 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0115 10:52:10.414597 1631243 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0115 10:52:10.414621 1631243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0115 10:52:10.414684 1631243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-944407
	I0115 10:52:10.418754 1631243 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 10:52:10.429731 1631243 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 10:52:10.429753 1631243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0115 10:52:10.429815 1631243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-944407
	I0115 10:52:10.435534 1631243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34719 SSHKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/addons-944407/id_rsa Username:docker}
	I0115 10:52:10.532748 1631243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34719 SSHKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/addons-944407/id_rsa Username:docker}
	I0115 10:52:10.552689 1631243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34719 SSHKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/addons-944407/id_rsa Username:docker}
	I0115 10:52:10.555559 1631243 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0115 10:52:10.555579 1631243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0115 10:52:10.555697 1631243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-944407
	I0115 10:52:10.578197 1631243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34719 SSHKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/addons-944407/id_rsa Username:docker}
	I0115 10:52:10.595394 1631243 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-944407" context rescaled to 1 replicas
	I0115 10:52:10.595432 1631243 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0115 10:52:10.600785 1631243 out.go:177] * Verifying Kubernetes components...
	I0115 10:52:10.602551 1631243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
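Component verification begins with a liveness probe of the kubelet unit; --quiet suppresses output, so only the exit status is inspected. A simplified standalone version of the same check, assuming shell access to the node:

	sudo systemctl is-active --quiet kubelet && echo "kubelet: active"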
	I0115 10:52:10.630228 1631243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34719 SSHKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/addons-944407/id_rsa Username:docker}
	I0115 10:52:10.648128 1631243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34719 SSHKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/addons-944407/id_rsa Username:docker}
	I0115 10:52:10.656603 1631243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34719 SSHKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/addons-944407/id_rsa Username:docker}
	I0115 10:52:10.656906 1631243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34719 SSHKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/addons-944407/id_rsa Username:docker}
	I0115 10:52:10.662422 1631243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34719 SSHKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/addons-944407/id_rsa Username:docker}
	I0115 10:52:10.690785 1631243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34719 SSHKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/addons-944407/id_rsa Username:docker}
	I0115 10:52:10.693101 1631243 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0115 10:52:10.697960 1631243 out.go:177]   - Using image docker.io/busybox:stable
	I0115 10:52:10.701003 1631243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34719 SSHKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/addons-944407/id_rsa Username:docker}
	I0115 10:52:10.701684 1631243 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0115 10:52:10.701699 1631243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0115 10:52:10.701780 1631243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-944407
	I0115 10:52:10.734105 1631243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34719 SSHKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/addons-944407/id_rsa Username:docker}
	W0115 10:52:10.735229 1631243 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0115 10:52:10.735260 1631243 retry.go:31] will retry after 154.634084ms: ssh: handshake failed: EOF
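The handshake EOF is transient: several SSH clients dial port 34719 at nearly the same instant and one connection is dropped before the handshake completes, so sshutil schedules a retry (here after ~155ms). Reachability can be confirmed by hand with the key, port, and user the log reports:

	ssh -i /home/jenkins/minikube-integration/17953-1625104/.minikube/machines/addons-944407/id_rsa -p 34719 docker@127.0.0.1 true && echo "ssh ok"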
	I0115 10:52:10.888578 1631243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0115 10:52:10.975959 1631243 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0115 10:52:10.976025 1631243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0115 10:52:11.065190 1631243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0115 10:52:11.068491 1631243 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0115 10:52:11.068518 1631243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0115 10:52:11.121048 1631243 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0115 10:52:11.121088 1631243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0115 10:52:11.152670 1631243 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0115 10:52:11.152732 1631243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0115 10:52:11.156469 1631243 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0115 10:52:11.156531 1631243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0115 10:52:11.175203 1631243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0115 10:52:11.190503 1631243 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0115 10:52:11.190576 1631243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0115 10:52:11.200941 1631243 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0115 10:52:11.201008 1631243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0115 10:52:11.219141 1631243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0115 10:52:11.240194 1631243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 10:52:11.260815 1631243 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0115 10:52:11.260886 1631243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0115 10:52:11.263832 1631243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0115 10:52:11.266560 1631243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0115 10:52:11.310856 1631243 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0115 10:52:11.310929 1631243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0115 10:52:11.312691 1631243 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0115 10:52:11.312744 1631243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0115 10:52:11.371175 1631243 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0115 10:52:11.371245 1631243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0115 10:52:11.384459 1631243 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0115 10:52:11.384523 1631243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0115 10:52:11.419156 1631243 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0115 10:52:11.419227 1631243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0115 10:52:11.455393 1631243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0115 10:52:11.530993 1631243 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0115 10:52:11.531068 1631243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0115 10:52:11.559648 1631243 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0115 10:52:11.559725 1631243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0115 10:52:11.587107 1631243 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0115 10:52:11.587180 1631243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0115 10:52:11.650649 1631243 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0115 10:52:11.650719 1631243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0115 10:52:11.672954 1631243 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0115 10:52:11.673026 1631243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0115 10:52:11.700850 1631243 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0115 10:52:11.700913 1631243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0115 10:52:11.758878 1631243 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0115 10:52:11.758948 1631243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0115 10:52:11.784119 1631243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0115 10:52:11.833107 1631243 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0115 10:52:11.833177 1631243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0115 10:52:11.865524 1631243 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0115 10:52:11.865596 1631243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0115 10:52:11.930608 1631243 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0115 10:52:11.930678 1631243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0115 10:52:11.952310 1631243 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0115 10:52:11.952383 1631243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0115 10:52:12.009829 1631243 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0115 10:52:12.009916 1631243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0115 10:52:12.038613 1631243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0115 10:52:12.062079 1631243 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0115 10:52:12.062155 1631243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0115 10:52:12.097606 1631243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0115 10:52:12.104863 1631243 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0115 10:52:12.104935 1631243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0115 10:52:12.171398 1631243 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0115 10:52:12.171476 1631243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0115 10:52:12.194356 1631243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0115 10:52:12.284414 1631243 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0115 10:52:12.284443 1631243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0115 10:52:12.443389 1631243 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0115 10:52:12.443415 1631243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0115 10:52:12.639217 1631243 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0115 10:52:12.639243 1631243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0115 10:52:12.782644 1631243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
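The csi-hostpath-driver stack (RBAC bindings, attacher, driverinfo, plugin, resizer, and its StorageClass) is applied as a single kubectl invocation with one -f per manifest. Once it settles, the result can be inspected with standard selectors; kubectl below is shorthand for the node-local binary with the cluster kubeconfig, and the label is the one the waiter polls later in this log:

	kubectl get storageclass
	kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=csi-hostpath-driver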
	I0115 10:52:12.822317 1631243 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.568065563s)
	I0115 10:52:12.822395 1631243 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
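The 2.57s bash pipeline that just completed rewrites the coredns ConfigMap in place: sed inserts a hosts block ahead of the forward directive and a log directive ahead of errors, then the edited Corefile is pushed back with kubectl replace. The injected fragment, reconstructed from the sed expression, is:

	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }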
	I0115 10:52:12.822434 1631243 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.219862224s)
	I0115 10:52:12.823299 1631243 node_ready.go:35] waiting up to 6m0s for node "addons-944407" to be "Ready" ...
	I0115 10:52:14.949777 1631243 node_ready.go:58] node "addons-944407" has status "Ready":"False"
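node_ready.go polls the node object until its Ready condition reports True, up to the 6m0s budget set at start.go:223; each node_ready.go:58 line below is one unsuccessful poll. An equivalent standalone wait with plain kubectl:

	kubectl wait --for=condition=Ready node/addons-944407 --timeout=6m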
	I0115 10:52:15.444664 1631243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.556006666s)
	I0115 10:52:15.513936 1631243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.448707309s)
	I0115 10:52:16.627071 1631243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.407847183s)
	I0115 10:52:16.627135 1631243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.386872504s)
	I0115 10:52:16.627229 1631243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.363339447s)
	I0115 10:52:16.627458 1631243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.360837333s)
	I0115 10:52:16.627635 1631243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.172174462s)
	I0115 10:52:16.627682 1631243 addons.go:470] Verifying addon registry=true in "addons-944407"
	I0115 10:52:16.627885 1631243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.843700082s)
	I0115 10:52:16.627937 1631243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.589244239s)
	I0115 10:52:16.628020 1631243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.530334341s)
	I0115 10:52:16.628074 1631243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.433646202s)
	I0115 10:52:16.628195 1631243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.452930855s)
	I0115 10:52:16.629499 1631243 out.go:177] * Verifying registry addon...
	I0115 10:52:16.630834 1631243 addons.go:470] Verifying addon metrics-server=true in "addons-944407"
	I0115 10:52:16.631715 1631243 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	W0115 10:52:16.631912 1631243 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0115 10:52:16.631935 1631243 retry.go:31] will retry after 191.841429ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
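This failure is an ordering race rather than a broken manifest: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass in the same invocation that creates the snapshot.storage.k8s.io CRDs, and the API server has not finished establishing those CRDs when the custom resource arrives, hence "ensure CRDs are installed first". The forced re-apply at 10:52:16.824 below completes cleanly once the CRDs are registered. A race-free sequence, sketched with standard kubectl and the file names from this run:

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for condition=established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml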
	I0115 10:52:16.632076 1631243 addons.go:470] Verifying addon ingress=true in "addons-944407"
	I0115 10:52:16.633686 1631243 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-944407 service yakd-dashboard -n yakd-dashboard
	
	I0115 10:52:16.637047 1631243 out.go:177] * Verifying ingress addon...
	I0115 10:52:16.639452 1631243 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0115 10:52:16.647857 1631243 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0115 10:52:16.647886 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:16.651340 1631243 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0115 10:52:16.651370 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0115 10:52:16.658838 1631243 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
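The storage-provisioner-rancher warning is an optimistic-concurrency conflict: two writers raced to update the local-path StorageClass, and the losing write carried a stale resourceVersion, so the API server rejected it. Re-reading the object and re-applying is the standard remedy; done by hand, marking the class default uses the documented annotation:

	kubectl patch storageclass local-path -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'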
	I0115 10:52:16.824937 1631243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0115 10:52:16.964106 1631243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.181353701s)
	I0115 10:52:16.964223 1631243 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-944407"
	I0115 10:52:16.966224 1631243 out.go:177] * Verifying csi-hostpath-driver addon...
	I0115 10:52:16.969319 1631243 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0115 10:52:16.997639 1631243 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0115 10:52:16.997707 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
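Each kapi.go:96 line is one poll of the pods matched by a label selector, repeated until they leave Pending. Every addon that registered a verifier (registry, ingress, csi-hostpath-driver, and shortly gcp-auth) produces such a stream, which is why the selectors below interleave. An equivalent blocking wait for this selector:

	kubectl -n kube-system wait --for=condition=Ready pod -l kubernetes.io/minikube-addons=csi-hostpath-driver --timeout=6m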
	I0115 10:52:17.151307 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:17.157611 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:17.330144 1631243 node_ready.go:58] node "addons-944407" has status "Ready":"False"
	I0115 10:52:17.479378 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:17.638319 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:17.656224 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:17.993621 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:18.167077 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:18.178394 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:18.272942 1631243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.447919847s)
	I0115 10:52:18.476180 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:18.639151 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:18.650084 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:18.845832 1631243 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0115 10:52:18.845923 1631243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-944407
	I0115 10:52:18.883515 1631243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34719 SSHKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/addons-944407/id_rsa Username:docker}
	I0115 10:52:18.975645 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:19.100223 1631243 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0115 10:52:19.136349 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:19.146936 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:19.204681 1631243 addons.go:234] Setting addon gcp-auth=true in "addons-944407"
	I0115 10:52:19.204742 1631243 host.go:66] Checking if "addons-944407" exists ...
	I0115 10:52:19.205309 1631243 cli_runner.go:164] Run: docker container inspect addons-944407 --format={{.State.Status}}
	I0115 10:52:19.231979 1631243 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0115 10:52:19.232084 1631243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-944407
	I0115 10:52:19.253333 1631243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34719 SSHKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/addons-944407/id_rsa Username:docker}
	I0115 10:52:19.362234 1631243 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0115 10:52:19.364282 1631243 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0115 10:52:19.366146 1631243 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0115 10:52:19.366168 1631243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0115 10:52:19.438067 1631243 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0115 10:52:19.438134 1631243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0115 10:52:19.474361 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:19.501890 1631243 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0115 10:52:19.501915 1631243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I0115 10:52:19.563434 1631243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
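gcp-auth installs its namespace, service, and webhook in one apply; the webhook is what later injects the staged /var/lib/minikube/google_application_credentials.json into workload pods. After the apply completes (1.21s, logged at 10:52:20.773), the deployment can be checked with the same label the verifier polls:

	kubectl -n gcp-auth get pods -l kubernetes.io/minikube-addons=gcp-auth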
	I0115 10:52:19.657741 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:19.658697 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:19.831142 1631243 node_ready.go:58] node "addons-944407" has status "Ready":"False"
	I0115 10:52:20.027623 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:20.137229 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:20.144663 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:20.475534 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:20.652393 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:20.662090 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:20.773520 1631243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.210036104s)
	I0115 10:52:20.777582 1631243 addons.go:470] Verifying addon gcp-auth=true in "addons-944407"
	I0115 10:52:20.780013 1631243 out.go:177] * Verifying gcp-auth addon...
	I0115 10:52:20.783030 1631243 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0115 10:52:20.803860 1631243 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0115 10:52:20.803892 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:20.975216 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:21.141335 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:21.144515 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:21.287483 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:21.474823 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:21.637336 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:21.643808 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:21.787711 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:21.976643 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:22.136098 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:22.144717 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:22.287964 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:22.327508 1631243 node_ready.go:58] node "addons-944407" has status "Ready":"False"
	I0115 10:52:22.475703 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:22.636755 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:22.644825 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:22.786540 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:22.974699 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:23.136390 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:23.144018 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:23.286791 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:23.474175 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:23.636040 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:23.643774 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:23.786467 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:23.974301 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:24.137470 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:24.144072 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:24.288067 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:24.475264 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:24.636587 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:24.643703 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:24.788706 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:24.827478 1631243 node_ready.go:58] node "addons-944407" has status "Ready":"False"
	I0115 10:52:24.973974 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:25.136663 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:25.143940 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:25.286547 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:25.486250 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:25.635914 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:25.644888 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:25.786679 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:25.974514 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:26.136758 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:26.144916 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:26.286563 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:26.480497 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:26.636591 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:26.644029 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:26.787809 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:26.974617 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:27.136159 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:27.144402 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:27.287036 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:27.327237 1631243 node_ready.go:58] node "addons-944407" has status "Ready":"False"
	I0115 10:52:27.473708 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:27.636098 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:27.643820 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:27.787368 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:27.976550 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:28.136154 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:28.144181 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:28.286820 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:28.474682 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:28.636636 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:28.644059 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:28.786697 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:28.973649 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:29.136485 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:29.144394 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:29.287134 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:29.474173 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:29.640550 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:29.643960 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:29.786510 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:29.826971 1631243 node_ready.go:58] node "addons-944407" has status "Ready":"False"
	I0115 10:52:29.974087 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:30.136747 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:30.143769 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:30.286481 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:30.474936 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:30.636408 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:30.643450 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:30.787399 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:30.974526 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:31.136492 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:31.144141 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:31.286923 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:31.474611 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:31.636293 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:31.644205 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:31.787755 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:31.827164 1631243 node_ready.go:58] node "addons-944407" has status "Ready":"False"
	I0115 10:52:31.974058 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:32.135650 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:32.144486 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:32.287679 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:32.474358 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:32.635968 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:32.643667 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:32.787350 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:32.974840 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:33.136506 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:33.144417 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:33.287073 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:33.474248 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:33.636231 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:33.643987 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:33.786849 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:33.827691 1631243 node_ready.go:58] node "addons-944407" has status "Ready":"False"
	I0115 10:52:33.974207 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:34.135699 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:34.144387 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:34.287153 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:34.474024 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:34.636216 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:34.644499 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:34.787500 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:34.974375 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:35.136445 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:35.144173 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:35.286989 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:35.474340 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:35.636595 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:35.643829 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:35.787166 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:35.973770 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:36.136210 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:36.144106 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:36.287408 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:36.327313 1631243 node_ready.go:58] node "addons-944407" has status "Ready":"False"
	I0115 10:52:36.474165 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:36.636697 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:36.643725 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:36.787307 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:36.973881 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:37.136446 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:37.144129 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:37.287006 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:37.474034 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:37.636565 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:37.644770 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:37.786550 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:37.974122 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:38.136913 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:38.144065 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:38.286479 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:38.474641 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:38.636110 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:38.644127 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:38.787088 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:38.827394 1631243 node_ready.go:58] node "addons-944407" has status "Ready":"False"
	I0115 10:52:38.974039 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:39.136261 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:39.144165 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:39.287033 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:39.474996 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:39.641691 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:39.644762 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:39.787405 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:39.976370 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:40.136503 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:40.144280 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:40.286750 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:40.475347 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:40.636642 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:40.644532 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:40.787294 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:40.974530 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:41.136482 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:41.144230 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:41.295932 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:41.326737 1631243 node_ready.go:58] node "addons-944407" has status "Ready":"False"
	I0115 10:52:41.473671 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:41.636270 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:41.643982 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:41.786857 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:41.975186 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:42.138384 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:42.146502 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:42.288642 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:42.474692 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:42.636145 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:42.643826 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:42.787609 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:42.974490 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:43.163990 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:43.170015 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:43.294120 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:43.330241 1631243 node_ready.go:49] node "addons-944407" has status "Ready":"True"
	I0115 10:52:43.330268 1631243 node_ready.go:38] duration metric: took 30.506911087s waiting for node "addons-944407" to be "Ready" ...
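The node_ready.go lines above are a poll loop on the node's Ready condition, which has just flipped to True after roughly 30.5s. A minimal client-go sketch of that style of check (a hypothetical helper, not minikube's actual code; assumes a k8s.io/apimachinery version recent enough to provide wait.PollUntilContextTimeout):

package nodewait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitNodeReady polls until the named node reports Ready=True, the same
// condition behind the node_ready.go "Ready":"True" line above.
func WaitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient API errors as "not ready yet"
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}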
	I0115 10:52:43.330299 1631243 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods, including those matching labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
	I0115 10:52:43.343033 1631243 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-mcrmc" in "kube-system" namespace to be "Ready" ...
	I0115 10:52:43.558928 1631243 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0115 10:52:43.558954 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:43.648139 1631243 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0115 10:52:43.648176 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:43.667914 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:43.791872 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:43.997914 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:44.137100 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:44.143819 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:44.286968 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:44.476699 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:44.651585 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:44.652506 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:44.787160 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:44.976639 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:45.137528 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:45.149394 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:45.287587 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:45.350945 1631243 pod_ready.go:102] pod "coredns-5dd5756b68-mcrmc" in "kube-system" namespace has status "Ready":"False"
	I0115 10:52:45.476434 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:45.640283 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:45.653003 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:45.811750 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:45.870783 1631243 pod_ready.go:92] pod "coredns-5dd5756b68-mcrmc" in "kube-system" namespace has status "Ready":"True"
	I0115 10:52:45.870810 1631243 pod_ready.go:81] duration metric: took 2.527743951s waiting for pod "coredns-5dd5756b68-mcrmc" in "kube-system" namespace to be "Ready" ...
	I0115 10:52:45.870834 1631243 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-944407" in "kube-system" namespace to be "Ready" ...
	I0115 10:52:45.879563 1631243 pod_ready.go:92] pod "etcd-addons-944407" in "kube-system" namespace has status "Ready":"True"
	I0115 10:52:45.879590 1631243 pod_ready.go:81] duration metric: took 8.746272ms waiting for pod "etcd-addons-944407" in "kube-system" namespace to be "Ready" ...
	I0115 10:52:45.879604 1631243 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-944407" in "kube-system" namespace to be "Ready" ...
	I0115 10:52:45.888572 1631243 pod_ready.go:92] pod "kube-apiserver-addons-944407" in "kube-system" namespace has status "Ready":"True"
	I0115 10:52:45.888598 1631243 pod_ready.go:81] duration metric: took 8.98497ms waiting for pod "kube-apiserver-addons-944407" in "kube-system" namespace to be "Ready" ...
	I0115 10:52:45.888611 1631243 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-944407" in "kube-system" namespace to be "Ready" ...
	I0115 10:52:45.897231 1631243 pod_ready.go:92] pod "kube-controller-manager-addons-944407" in "kube-system" namespace has status "Ready":"True"
	I0115 10:52:45.897259 1631243 pod_ready.go:81] duration metric: took 8.635424ms waiting for pod "kube-controller-manager-addons-944407" in "kube-system" namespace to be "Ready" ...
	I0115 10:52:45.897273 1631243 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7tlcp" in "kube-system" namespace to be "Ready" ...
	I0115 10:52:45.908751 1631243 pod_ready.go:92] pod "kube-proxy-7tlcp" in "kube-system" namespace has status "Ready":"True"
	I0115 10:52:45.908778 1631243 pod_ready.go:81] duration metric: took 11.496447ms waiting for pod "kube-proxy-7tlcp" in "kube-system" namespace to be "Ready" ...
	I0115 10:52:45.908790 1631243 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-944407" in "kube-system" namespace to be "Ready" ...
	I0115 10:52:45.976666 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:46.137941 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:46.145089 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:46.259169 1631243 pod_ready.go:92] pod "kube-scheduler-addons-944407" in "kube-system" namespace has status "Ready":"True"
	I0115 10:52:46.259244 1631243 pod_ready.go:81] duration metric: took 350.444874ms waiting for pod "kube-scheduler-addons-944407" in "kube-system" namespace to be "Ready" ...
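Each pod_ready.go check above (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) keys off the pod's PodReady condition rather than its phase, which is why a Running pod can still log "Ready":"False". A short sketch of that predicate (hypothetical helper, shown for illustration):

package podwait

import corev1 "k8s.io/api/core/v1"

// IsPodReady reports whether the pod's PodReady condition is True; this is
// the check behind each pod_ready.go "Ready":"True" line above.
func IsPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}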
	I0115 10:52:46.259270 1631243 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-6vpnh" in "kube-system" namespace to be "Ready" ...
	I0115 10:52:46.287967 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:46.477782 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:46.636833 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:46.645014 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:46.788293 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:46.977427 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:47.137468 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:47.144187 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:47.290464 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:47.475979 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:47.642572 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:47.646578 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:47.786975 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:47.975880 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:48.146726 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:48.151259 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:48.267380 1631243 pod_ready.go:102] pod "metrics-server-7c66d45ddc-6vpnh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:52:48.289854 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:48.475453 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:48.637459 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:48.643689 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:48.788981 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:48.975518 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:49.153954 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:49.154686 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:49.286839 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:49.475598 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:49.649033 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:49.650179 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:49.786690 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:49.976329 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:50.137318 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:50.143646 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:50.286979 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:50.475814 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:50.636749 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:50.644101 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:50.779706 1631243 pod_ready.go:102] pod "metrics-server-7c66d45ddc-6vpnh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:52:50.788012 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:50.975826 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:51.137150 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:51.149611 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:51.287828 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:51.475269 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:51.648534 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:51.650355 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:51.787821 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:51.975489 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:52.182501 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:52.185334 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:52.288366 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:52.494832 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:52.637069 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:52.645671 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:52.790762 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:52.977043 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:53.137566 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:53.144746 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:53.267653 1631243 pod_ready.go:102] pod "metrics-server-7c66d45ddc-6vpnh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:52:53.287395 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:53.480294 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:53.642097 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:53.652923 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:53.788353 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:54.006321 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:54.146524 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:54.162448 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:54.299144 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:54.478605 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:54.644120 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:54.646001 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:54.800157 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:54.982576 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:55.139392 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:55.147090 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:55.269034 1631243 pod_ready.go:102] pod "metrics-server-7c66d45ddc-6vpnh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:52:55.288793 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:55.475789 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:55.643306 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:55.647523 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:55.789367 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:55.975707 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:56.139514 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:56.144968 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:56.287343 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:56.476836 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:56.636682 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:56.644147 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:56.790068 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:56.976165 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:57.137463 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:57.143946 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:57.286920 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:57.475518 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:57.636690 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:57.644511 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:57.766387 1631243 pod_ready.go:102] pod "metrics-server-7c66d45ddc-6vpnh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:52:57.788593 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:57.980543 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:58.138603 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:58.145273 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:58.295102 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:58.475682 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:58.637760 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:58.644458 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:58.790255 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:58.977076 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:59.139679 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:59.143880 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:59.287096 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:59.475500 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:52:59.642410 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:52:59.644890 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:52:59.786759 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:52:59.977239 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:00.172214 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:00.179038 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:53:00.290735 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:00.301331 1631243 pod_ready.go:102] pod "metrics-server-7c66d45ddc-6vpnh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:53:00.475118 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:00.639132 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:53:00.644877 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:00.786988 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:00.983822 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:01.137712 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:53:01.144285 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:01.291540 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:01.474934 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:01.636615 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:53:01.643591 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:01.786971 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:01.975159 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:02.137114 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:53:02.144471 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:02.287157 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:02.475097 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:02.637844 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:53:02.644409 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:02.779869 1631243 pod_ready.go:102] pod "metrics-server-7c66d45ddc-6vpnh" in "kube-system" namespace has status "Ready":"False"
	I0115 10:53:02.792624 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:02.976603 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:03.138597 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:53:03.147426 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:03.288356 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:03.475367 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:03.637256 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:53:03.644640 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:03.765641 1631243 pod_ready.go:92] pod "metrics-server-7c66d45ddc-6vpnh" in "kube-system" namespace has status "Ready":"True"
	I0115 10:53:03.765676 1631243 pod_ready.go:81] duration metric: took 17.50638578s waiting for pod "metrics-server-7c66d45ddc-6vpnh" in "kube-system" namespace to be "Ready" ...
	I0115 10:53:03.765690 1631243 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-wlxzq" in "kube-system" namespace to be "Ready" ...
	I0115 10:53:03.786662 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:03.974615 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:04.137121 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:53:04.143630 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:04.287099 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:04.475679 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:04.637197 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:53:04.643646 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:04.787149 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:04.975340 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:05.137140 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:53:05.144440 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:05.286749 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:05.475862 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:05.637610 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:53:05.648550 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:05.772690 1631243 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-wlxzq" in "kube-system" namespace has status "Ready":"False"
	I0115 10:53:05.787812 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:05.980035 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:06.138077 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:53:06.145796 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:06.292216 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:06.475414 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:06.637364 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:53:06.646093 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:06.791217 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:06.976217 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:07.145519 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:53:07.154633 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:07.290155 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:07.475831 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:07.639056 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:53:07.646260 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:07.774000 1631243 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-wlxzq" in "kube-system" namespace has status "Ready":"False"
	I0115 10:53:07.787737 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:07.975690 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:08.138591 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:53:08.146772 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:08.287532 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:08.475816 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:08.637876 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:53:08.649166 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:08.787616 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:08.975358 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:09.137822 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:53:09.144379 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:09.273203 1631243 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-wlxzq" in "kube-system" namespace has status "Ready":"True"
	I0115 10:53:09.273231 1631243 pod_ready.go:81] duration metric: took 5.507531915s waiting for pod "nvidia-device-plugin-daemonset-wlxzq" in "kube-system" namespace to be "Ready" ...
	I0115 10:53:09.273288 1631243 pod_ready.go:38] duration metric: took 25.942975013s for extra waiting for all system-critical pods and pods matching labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 10:53:09.273311 1631243 api_server.go:52] waiting for apiserver process to appear ...
	I0115 10:53:09.273385 1631243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 10:53:09.288434 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:09.291000 1631243 api_server.go:72] duration metric: took 58.695541696s to wait for apiserver process to appear ...
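The apiserver process check above shells into the node and greps for the process. A rough local equivalent (hypothetical helper; minikube runs the command through its ssh_runner, not os/exec):

package apicheck

import "os/exec"

// APIServerRunning approximates the ssh_runner step above, run locally:
// with pgrep, -f matches against the full command line, -x requires the
// pattern to match that whole line, and -n returns only the newest match.
// Exit status 0 means a kube-apiserver process was found.
func APIServerRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}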
	I0115 10:53:09.291022 1631243 api_server.go:88] waiting for apiserver healthz status ...
	I0115 10:53:09.291042 1631243 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0115 10:53:09.302572 1631243 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0115 10:53:09.303893 1631243 api_server.go:141] control plane version: v1.28.4
	I0115 10:53:09.303922 1631243 api_server.go:131] duration metric: took 12.89244ms to wait for apiserver health ...
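The healthz probe above is a plain HTTPS GET that expects a 200 response with body "ok". A self-contained sketch (hypothetical helper; certificate handling is deliberately simplified, whereas a real client would present the cluster's CA and client certificates):

package apicheck

import (
	"crypto/tls"
	"io"
	"net/http"
)

// Healthz issues the same probe as api_server.go above: GET /healthz and
// treat HTTP 200 with body "ok" as healthy.
func Healthz(url string) (bool, error) {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return false, err
	}
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

Against this cluster it would be invoked as Healthz("https://192.168.49.2:8443/healthz").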
	I0115 10:53:09.303931 1631243 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 10:53:09.313288 1631243 system_pods.go:59] 18 kube-system pods found
	I0115 10:53:09.313329 1631243 system_pods.go:61] "coredns-5dd5756b68-mcrmc" [2f6d1218-2ec0-4fd6-8f78-a02dc799dc1e] Running
	I0115 10:53:09.313336 1631243 system_pods.go:61] "csi-hostpath-attacher-0" [2bd6e723-064e-4ee9-b9a6-6aec5dac264e] Running
	I0115 10:53:09.313368 1631243 system_pods.go:61] "csi-hostpath-resizer-0" [d2c4b417-7bff-4a9e-ba16-21c0568d5fff] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0115 10:53:09.313384 1631243 system_pods.go:61] "csi-hostpathplugin-szgwl" [54df1146-8006-4894-a7b3-29a181c6e8d8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0115 10:53:09.313391 1631243 system_pods.go:61] "etcd-addons-944407" [84098342-f40e-4fc3-839f-e5aac88ed383] Running
	I0115 10:53:09.313399 1631243 system_pods.go:61] "kindnet-sdlzq" [457b4d2c-ddc0-4ecc-a16b-7b7261d18bf3] Running
	I0115 10:53:09.313405 1631243 system_pods.go:61] "kube-apiserver-addons-944407" [9b9bfcca-f52e-4508-8f96-9a5835b99e80] Running
	I0115 10:53:09.313410 1631243 system_pods.go:61] "kube-controller-manager-addons-944407" [bd0abf96-f99b-4613-9428-71d82e828d56] Running
	I0115 10:53:09.313420 1631243 system_pods.go:61] "kube-ingress-dns-minikube" [6a29df1d-e7cf-4984-80f5-a26c40fc0a4a] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0115 10:53:09.313425 1631243 system_pods.go:61] "kube-proxy-7tlcp" [aca34eb0-2450-4779-8bd0-9cac62fd8c61] Running
	I0115 10:53:09.313446 1631243 system_pods.go:61] "kube-scheduler-addons-944407" [cedaccfa-dc4f-4086-a45b-2bf85dd52a79] Running
	I0115 10:53:09.313460 1631243 system_pods.go:61] "metrics-server-7c66d45ddc-6vpnh" [ccf80058-1c18-44ab-b238-6546e3a32eca] Running
	I0115 10:53:09.313466 1631243 system_pods.go:61] "nvidia-device-plugin-daemonset-wlxzq" [13278ec8-c26a-491b-a4a8-b0324424d3a7] Running
	I0115 10:53:09.313477 1631243 system_pods.go:61] "registry-bnfg8" [b4acd00d-da91-4eb6-bd16-c83cf4d53f2c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0115 10:53:09.313484 1631243 system_pods.go:61] "registry-proxy-nzd7c" [763da9f9-9a33-4227-acb2-c43f50b03261] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0115 10:53:09.313493 1631243 system_pods.go:61] "snapshot-controller-58dbcc7b99-np6fz" [f90749a1-966d-48c5-bb75-6e012022f616] Running
	I0115 10:53:09.313505 1631243 system_pods.go:61] "snapshot-controller-58dbcc7b99-sd7k9" [f934a6cd-26f1-4dac-9549-cefd00b6f9ec] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0115 10:53:09.313528 1631243 system_pods.go:61] "storage-provisioner" [4dec3d42-cd44-4ace-89db-2c70a6e63e3d] Running
	I0115 10:53:09.313539 1631243 system_pods.go:74] duration metric: took 9.602357ms to wait for pod list to return data ...
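The 18-pod inventory above comes from listing the kube-system namespace and inspecting each pod's phase; the Pending entries are pods whose containers have not all started yet. A sketch of that listing step (hypothetical helper):

package podcheck

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// RunningByName mirrors the system_pods.go step above: list the kube-system
// pods and record which ones are in phase Running.
func RunningByName(ctx context.Context, cs kubernetes.Interface) (map[string]bool, error) {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	running := make(map[string]bool, len(pods.Items))
	for _, p := range pods.Items {
		running[p.Name] = p.Status.Phase == corev1.PodRunning
	}
	return running, nil
}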
	I0115 10:53:09.313549 1631243 default_sa.go:34] waiting for default service account to be created ...
	I0115 10:53:09.316265 1631243 default_sa.go:45] found service account: "default"
	I0115 10:53:09.316289 1631243 default_sa.go:55] duration metric: took 2.729703ms for default service account to be created ...
	I0115 10:53:09.316298 1631243 system_pods.go:116] waiting for k8s-apps to be running ...
	I0115 10:53:09.337244 1631243 system_pods.go:86] 18 kube-system pods found
	I0115 10:53:09.337279 1631243 system_pods.go:89] "coredns-5dd5756b68-mcrmc" [2f6d1218-2ec0-4fd6-8f78-a02dc799dc1e] Running
	I0115 10:53:09.337286 1631243 system_pods.go:89] "csi-hostpath-attacher-0" [2bd6e723-064e-4ee9-b9a6-6aec5dac264e] Running
	I0115 10:53:09.337294 1631243 system_pods.go:89] "csi-hostpath-resizer-0" [d2c4b417-7bff-4a9e-ba16-21c0568d5fff] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0115 10:53:09.337319 1631243 system_pods.go:89] "csi-hostpathplugin-szgwl" [54df1146-8006-4894-a7b3-29a181c6e8d8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0115 10:53:09.337333 1631243 system_pods.go:89] "etcd-addons-944407" [84098342-f40e-4fc3-839f-e5aac88ed383] Running
	I0115 10:53:09.337340 1631243 system_pods.go:89] "kindnet-sdlzq" [457b4d2c-ddc0-4ecc-a16b-7b7261d18bf3] Running
	I0115 10:53:09.337345 1631243 system_pods.go:89] "kube-apiserver-addons-944407" [9b9bfcca-f52e-4508-8f96-9a5835b99e80] Running
	I0115 10:53:09.337370 1631243 system_pods.go:89] "kube-controller-manager-addons-944407" [bd0abf96-f99b-4613-9428-71d82e828d56] Running
	I0115 10:53:09.337384 1631243 system_pods.go:89] "kube-ingress-dns-minikube" [6a29df1d-e7cf-4984-80f5-a26c40fc0a4a] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0115 10:53:09.337390 1631243 system_pods.go:89] "kube-proxy-7tlcp" [aca34eb0-2450-4779-8bd0-9cac62fd8c61] Running
	I0115 10:53:09.337400 1631243 system_pods.go:89] "kube-scheduler-addons-944407" [cedaccfa-dc4f-4086-a45b-2bf85dd52a79] Running
	I0115 10:53:09.337405 1631243 system_pods.go:89] "metrics-server-7c66d45ddc-6vpnh" [ccf80058-1c18-44ab-b238-6546e3a32eca] Running
	I0115 10:53:09.337410 1631243 system_pods.go:89] "nvidia-device-plugin-daemonset-wlxzq" [13278ec8-c26a-491b-a4a8-b0324424d3a7] Running
	I0115 10:53:09.337420 1631243 system_pods.go:89] "registry-bnfg8" [b4acd00d-da91-4eb6-bd16-c83cf4d53f2c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0115 10:53:09.337430 1631243 system_pods.go:89] "registry-proxy-nzd7c" [763da9f9-9a33-4227-acb2-c43f50b03261] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0115 10:53:09.337447 1631243 system_pods.go:89] "snapshot-controller-58dbcc7b99-np6fz" [f90749a1-966d-48c5-bb75-6e012022f616] Running
	I0115 10:53:09.337462 1631243 system_pods.go:89] "snapshot-controller-58dbcc7b99-sd7k9" [f934a6cd-26f1-4dac-9549-cefd00b6f9ec] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0115 10:53:09.337477 1631243 system_pods.go:89] "storage-provisioner" [4dec3d42-cd44-4ace-89db-2c70a6e63e3d] Running
	I0115 10:53:09.337491 1631243 system_pods.go:126] duration metric: took 21.18672ms to wait for k8s-apps to be running ...
	I0115 10:53:09.337499 1631243 system_svc.go:44] waiting for kubelet service to be running ....
	I0115 10:53:09.337574 1631243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 10:53:09.368728 1631243 system_svc.go:56] duration metric: took 31.22027ms for WaitForService to wait for kubelet.
	I0115 10:53:09.368752 1631243 kubeadm.go:581] duration metric: took 58.773298516s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0115 10:53:09.368771 1631243 node_conditions.go:102] verifying NodePressure condition ...
	I0115 10:53:09.376637 1631243 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0115 10:53:09.376672 1631243 node_conditions.go:123] node cpu capacity is 2
	I0115 10:53:09.376685 1631243 node_conditions.go:105] duration metric: took 7.908903ms to run NodePressure ...
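The NodePressure verification above also reports capacity straight from the node object: 203034800Ki of ephemeral storage and 2 CPUs. A sketch of reading those same fields (hypothetical helper):

package nodecheck

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// PrintCapacity reads the two fields node_conditions.go logs above,
// ephemeral-storage and cpu, from the node's Status.Capacity resource list.
func PrintCapacity(ctx context.Context, cs kubernetes.Interface, name string) error {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	fmt.Printf("ephemeral-storage=%s cpu=%s\n", storage.String(), cpu.String())
	return nil
}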
	I0115 10:53:09.376696 1631243 start.go:228] waiting for startup goroutines ...
	I0115 10:53:09.475284 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:09.647546 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:09.648263 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:53:09.787507 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:09.976757 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:10.137021 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:53:10.144312 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:10.287342 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:10.476360 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:10.637393 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:53:10.644529 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:10.787487 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:10.975671 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:11.137081 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:53:11.144816 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:11.287794 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:11.476542 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:11.637774 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:53:11.652186 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:11.786600 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:11.983108 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:12.137210 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:53:12.148917 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:12.287158 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:12.476976 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:12.638318 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:53:12.649317 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:12.787798 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:12.977455 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:13.138625 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:53:13.144207 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:13.289116 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:13.475546 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:13.638909 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:53:13.644443 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:13.787482 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:13.977229 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:14.137110 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:53:14.144684 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:14.287205 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:14.483878 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:14.637062 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:53:14.645553 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:14.787842 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:14.976222 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:15.149560 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:53:15.155880 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:15.288337 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:15.474985 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:15.636444 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 10:53:15.643583 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:15.787281 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:15.974766 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:16.144893 1631243 kapi.go:107] duration metric: took 59.513169376s to wait for kubernetes.io/minikube-addons=registry ...
	I0115 10:53:16.149027 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:16.287096 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:16.478548 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:16.645945 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:16.788235 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:16.976586 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:17.147640 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:17.288837 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:17.477390 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:17.645987 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:17.787328 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:17.975725 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:18.145143 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:18.287144 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:18.475905 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:18.647435 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:18.787375 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:18.976963 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:19.148131 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:19.287586 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:19.475636 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:19.643615 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:19.787713 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:19.975029 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:20.145030 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:20.287836 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:20.476288 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:20.644554 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:20.787891 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:20.976321 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:21.148414 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:21.287333 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:21.475786 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:21.651110 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:21.786977 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:21.975255 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:22.144362 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:22.287364 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:22.475785 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:22.643956 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:22.787756 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:22.976027 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:23.144828 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:23.287986 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:23.477203 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:23.644759 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:23.804591 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:23.975761 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:24.149881 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:24.287009 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:24.483476 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:24.644589 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:24.787832 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:24.975942 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:25.144797 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:25.287883 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:25.475992 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:25.644902 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:25.789894 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:25.977078 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:26.148712 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:26.287485 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:26.477216 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:26.644390 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:26.800867 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:26.975815 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:27.147084 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:27.286803 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:27.475318 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:27.644470 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:27.787529 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:27.976143 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:28.145603 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:28.288914 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:28.476120 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:28.643990 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:28.787664 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:28.975836 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:29.161991 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:29.292565 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:29.475707 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:29.665921 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:29.787177 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:29.976058 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:30.145741 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:30.287551 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:30.475043 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:30.659341 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:30.787653 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:30.975415 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:31.144211 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:31.287609 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:31.475490 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:31.644162 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:31.788903 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:31.976046 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:32.149880 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:32.286739 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:32.476518 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:32.644807 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:32.788530 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:32.975799 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:33.144995 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:33.286752 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:33.476159 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:33.644837 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:33.799199 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:33.979321 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:34.145445 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:34.287876 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:34.477428 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:34.646500 1631243 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 10:53:34.787780 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:34.977598 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:35.144534 1631243 kapi.go:107] duration metric: took 1m18.505080596s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0115 10:53:35.357754 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:35.536187 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:35.787950 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:35.977843 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:36.286537 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:36.475183 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:36.787677 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:36.975288 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:37.287122 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:37.476310 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:37.788423 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:37.975869 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:38.289307 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:38.483513 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:38.788685 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:38.975626 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:39.287370 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:39.481230 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:39.787342 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:39.976306 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:40.286865 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:40.478976 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:40.786858 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:40.978263 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:41.287192 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:41.476081 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:41.787322 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:41.975853 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:42.288334 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:42.476102 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:42.800189 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:42.975051 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:43.287303 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:43.481570 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:43.787923 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:43.976067 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:44.287221 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:44.475446 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 10:53:44.787477 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:44.975953 1631243 kapi.go:107] duration metric: took 1m28.006633295s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0115 10:53:45.290253 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:45.786934 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:46.287013 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:46.786603 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:47.287214 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:47.786588 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:48.287051 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:48.787133 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:49.287841 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:49.787363 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:50.287288 1631243 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 10:53:50.787348 1631243 kapi.go:107] duration metric: took 1m30.00431236s to wait for kubernetes.io/minikube-addons=gcp-auth ...
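	
	The kapi.go:96 lines above are minikube polling each addon's pods by label selector until they leave Pending. A minimal client-go sketch of the same idea (not minikube's actual kapi.go; the kubeconfig path, poll interval, namespace, and selector here are illustrative):
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// waitForLabeledPods polls until every pod matching selector is Running,
	// mirroring the "waiting for pod ... current state: Pending" loop above.
	func waitForLabeledPods(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 {
				allRunning := true
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						allRunning = false // still Pending, as in the log above
						break
					}
				}
				if allRunning {
					return nil
				}
			}
			select {
			case <-ctx.Done():
				return ctx.Err() // overall deadline, like kapi.go's timeout
			case <-ticker.C:
			}
		}
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		if err := waitForLabeledPods(ctx, cs, "gcp-auth", "kubernetes.io/minikube-addons=gcp-auth"); err != nil {
			panic(err)
		}
		fmt.Println("pods are Running")
	}
	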
	I0115 10:53:50.789414 1631243 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-944407 cluster.
	I0115 10:53:50.791498 1631243 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0115 10:53:50.793528 1631243 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0115 10:53:50.795625 1631243 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, storage-provisioner, metrics-server, nvidia-device-plugin, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0115 10:53:50.797345 1631243 addons.go:505] enable addons completed in 1m40.805000281s: enabled=[cloud-spanner ingress-dns storage-provisioner metrics-server nvidia-device-plugin inspektor-gadget yakd default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0115 10:53:50.797393 1631243 start.go:233] waiting for cluster config update ...
	I0115 10:53:50.797413 1631243 start.go:242] writing updated cluster config ...
	I0115 10:53:50.797709 1631243 ssh_runner.go:195] Run: rm -f paused
	I0115 10:53:51.174672 1631243 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0115 10:53:51.176743 1631243 out.go:177] * Done! kubectl is now configured to use "addons-944407" cluster and "default" namespace by default
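	
	The gcp-auth messages above say a pod can opt out of credential mounting by carrying a label with the `gcp-auth-skip-secret` key. A sketch of creating such a pod with client-go, under the same assumptions as the previous sketch; the pod name, image, and the label value "true" are my assumptions (the log only names the key):
	
	package main
	
	import (
		"context"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name: "no-gcp-creds", // illustrative name
				// The label key is what the minikube message above documents;
				// the value "true" is an assumption, not taken from the report.
				Labels: map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{Name: "app", Image: "nginx"}},
			},
		}
		if _, err := cs.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
	}
	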
	
	
	==> CRI-O <==
	Jan 15 10:57:01 addons-944407 crio[886]: time="2024-01-15 10:57:01.282497327Z" level=info msg="Created container 3f7a1d85223ba152b208facff7584ff9cb160ab25371afd609ad67d9a46a5df3: default/hello-world-app-5d77478584-f62vd/hello-world-app" id=b3232230-1120-4463-8927-9f4b4e3843b2 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 15 10:57:01 addons-944407 crio[886]: time="2024-01-15 10:57:01.283390170Z" level=info msg="Starting container: 3f7a1d85223ba152b208facff7584ff9cb160ab25371afd609ad67d9a46a5df3" id=4bc1cadb-f66b-42b5-951c-99e3f8cd6954 name=/runtime.v1.RuntimeService/StartContainer
	Jan 15 10:57:01 addons-944407 conmon[7512]: conmon 3f7a1d85223ba152b208 <ninfo>: container 7523 exited with status 1
	Jan 15 10:57:01 addons-944407 crio[886]: time="2024-01-15 10:57:01.295754541Z" level=info msg="Started container" PID=7523 containerID=3f7a1d85223ba152b208facff7584ff9cb160ab25371afd609ad67d9a46a5df3 description=default/hello-world-app-5d77478584-f62vd/hello-world-app id=4bc1cadb-f66b-42b5-951c-99e3f8cd6954 name=/runtime.v1.RuntimeService/StartContainer sandboxID=25b4471cb0d87ae2c673ab539441bbe44395fb7fd7436ef62474d45542590283
	Jan 15 10:57:01 addons-944407 crio[886]: time="2024-01-15 10:57:01.479791043Z" level=info msg="Removing container: ac4eae806d89532ee6bd96b51f0a2019320c732442fd9f2b9857309440814ad0" id=170b95b6-20a7-4cba-bf72-5d5b86b582b2 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 15 10:57:01 addons-944407 crio[886]: time="2024-01-15 10:57:01.497575395Z" level=info msg="Removed container ac4eae806d89532ee6bd96b51f0a2019320c732442fd9f2b9857309440814ad0: default/hello-world-app-5d77478584-f62vd/hello-world-app" id=170b95b6-20a7-4cba-bf72-5d5b86b582b2 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 15 10:57:02 addons-944407 crio[886]: time="2024-01-15 10:57:02.158437076Z" level=info msg="Stopping pod sandbox: ed8cb5ade06bef43971e4d95e91ed9f44dd8e3c5bfd472f7db9659aff5c18894" id=d2f4a458-0381-4a6c-9fa1-70fdc97f3a84 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jan 15 10:57:02 addons-944407 crio[886]: time="2024-01-15 10:57:02.164806358Z" level=info msg="Stopped pod sandbox: ed8cb5ade06bef43971e4d95e91ed9f44dd8e3c5bfd472f7db9659aff5c18894" id=d2f4a458-0381-4a6c-9fa1-70fdc97f3a84 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jan 15 10:57:02 addons-944407 crio[886]: time="2024-01-15 10:57:02.483747070Z" level=info msg="Removing container: 6be972dbb541c2ca3c79a49531722a6e08c302456f3e569a32b26e0e28409f7f" id=a033a532-207d-4b0d-8b61-a99fc42941ac name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 15 10:57:02 addons-944407 crio[886]: time="2024-01-15 10:57:02.509933425Z" level=info msg="Removed container 6be972dbb541c2ca3c79a49531722a6e08c302456f3e569a32b26e0e28409f7f: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=a033a532-207d-4b0d-8b61-a99fc42941ac name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 15 10:57:04 addons-944407 crio[886]: time="2024-01-15 10:57:04.237996115Z" level=info msg="Stopping container: 817525992a31e7a38cca779fb0523b549382ac342a9a8455b061efd114a666fa (timeout: 2s)" id=4759eae9-75ae-4562-a21f-1f47ffc541bb name=/runtime.v1.RuntimeService/StopContainer
	Jan 15 10:57:06 addons-944407 crio[886]: time="2024-01-15 10:57:06.245747533Z" level=warning msg="Stopping container 817525992a31e7a38cca779fb0523b549382ac342a9a8455b061efd114a666fa with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=4759eae9-75ae-4562-a21f-1f47ffc541bb name=/runtime.v1.RuntimeService/StopContainer
	Jan 15 10:57:06 addons-944407 conmon[4447]: conmon 817525992a31e7a38cca <ninfo>: container 4459 exited with status 137
	Jan 15 10:57:06 addons-944407 crio[886]: time="2024-01-15 10:57:06.387596035Z" level=info msg="Stopped container 817525992a31e7a38cca779fb0523b549382ac342a9a8455b061efd114a666fa: ingress-nginx/ingress-nginx-controller-69cff4fd79-hqbnm/controller" id=4759eae9-75ae-4562-a21f-1f47ffc541bb name=/runtime.v1.RuntimeService/StopContainer
	Jan 15 10:57:06 addons-944407 crio[886]: time="2024-01-15 10:57:06.388076351Z" level=info msg="Stopping pod sandbox: b54caaf07d4d23a62c13b279309fb8b7cd872dd9d1875062a39e68ea4eb72bb2" id=d1d188ad-0ad6-4540-91c0-72dab5f02207 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jan 15 10:57:06 addons-944407 crio[886]: time="2024-01-15 10:57:06.391498040Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-WFDDWU6K2WTPP4WN - [0:0]\n:KUBE-HP-EX2KHAQFQWAW4JLM - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n-X KUBE-HP-WFDDWU6K2WTPP4WN\n-X KUBE-HP-EX2KHAQFQWAW4JLM\nCOMMIT\n"
	Jan 15 10:57:06 addons-944407 crio[886]: time="2024-01-15 10:57:06.400860461Z" level=info msg="Closing host port tcp:80"
	Jan 15 10:57:06 addons-944407 crio[886]: time="2024-01-15 10:57:06.400919167Z" level=info msg="Closing host port tcp:443"
	Jan 15 10:57:06 addons-944407 crio[886]: time="2024-01-15 10:57:06.403004456Z" level=info msg="Host port tcp:80 does not have an open socket"
	Jan 15 10:57:06 addons-944407 crio[886]: time="2024-01-15 10:57:06.403035421Z" level=info msg="Host port tcp:443 does not have an open socket"
	Jan 15 10:57:06 addons-944407 crio[886]: time="2024-01-15 10:57:06.403225849Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-69cff4fd79-hqbnm Namespace:ingress-nginx ID:b54caaf07d4d23a62c13b279309fb8b7cd872dd9d1875062a39e68ea4eb72bb2 UID:4b450b63-d351-4465-a521-a552650162a9 NetNS:/var/run/netns/51a70f8e-850a-4dd7-b18a-76884f9fbfee Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 15 10:57:06 addons-944407 crio[886]: time="2024-01-15 10:57:06.403367745Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-69cff4fd79-hqbnm from CNI network \"kindnet\" (type=ptp)"
	Jan 15 10:57:06 addons-944407 crio[886]: time="2024-01-15 10:57:06.424509502Z" level=info msg="Stopped pod sandbox: b54caaf07d4d23a62c13b279309fb8b7cd872dd9d1875062a39e68ea4eb72bb2" id=d1d188ad-0ad6-4540-91c0-72dab5f02207 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jan 15 10:57:06 addons-944407 crio[886]: time="2024-01-15 10:57:06.493463320Z" level=info msg="Removing container: 817525992a31e7a38cca779fb0523b549382ac342a9a8455b061efd114a666fa" id=ae3085f0-f005-41ac-a88f-57a1c77a3ef6 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 15 10:57:06 addons-944407 crio[886]: time="2024-01-15 10:57:06.512433046Z" level=info msg="Removed container 817525992a31e7a38cca779fb0523b549382ac342a9a8455b061efd114a666fa: ingress-nginx/ingress-nginx-controller-69cff4fd79-hqbnm/controller" id=ae3085f0-f005-41ac-a88f-57a1c77a3ef6 name=/runtime.v1.RuntimeService/RemoveContainer
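	
	The CRI-O lines above show the standard two-phase stop: a stop signal, a 2-second grace period ("timeout: 2s"), then a forced kill, with conmon reporting exit status 137 (128 + 9, i.e. SIGKILL). A generic Go sketch of that pattern for an ordinary child process, not CRI-O's implementation; the command and timeout are illustrative:
	
	package main
	
	import (
		"os/exec"
		"syscall"
		"time"
	)
	
	// stopWithTimeout asks the process to exit, waits up to d, then
	// force-kills it -- the same sequence CRI-O logs above.
	func stopWithTimeout(cmd *exec.Cmd, d time.Duration) error {
		done := make(chan error, 1)
		go func() { done <- cmd.Wait() }()
		_ = cmd.Process.Signal(syscall.SIGTERM) // polite stop signal
		select {
		case err := <-done:
			return err // exited within the grace period
		case <-time.After(d):
			_ = cmd.Process.Kill() // SIGKILL after the timeout; shells report 137
			return <-done
		}
	}
	
	func main() {
		cmd := exec.Command("sleep", "60") // illustrative long-running child
		if err := cmd.Start(); err != nil {
			panic(err)
		}
		_ = stopWithTimeout(cmd, 2*time.Second)
	}
	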
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	3f7a1d85223ba       dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79                                                             10 seconds ago       Exited              hello-world-app           2                   25b4471cb0d87       hello-world-app-5d77478584-f62vd
	f80421ad0e64b       ghcr.io/headlamp-k8s/headlamp@sha256:0fe50c48c186b89ff3d341dba427174d8232a64c3062af5de854a3a7cb2105ce                        About a minute ago   Running             headlamp                  0                   161bb6381a6c5       headlamp-7ddfbb94ff-72t6n
	0da7c3a2c0df2       docker.io/library/nginx@sha256:7913e8fa2e6a5f0160a5e6b7ea48b7d4a301c6058d63c3d632a35a59093cb4eb                              2 minutes ago        Running             nginx                     0                   5000386776fc4       nginx
	8a3786415e0a5       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:63b520448091bc94aa4dba00d6b3b3c25e410c4fb73aa46feae5b25f9895abaa                 3 minutes ago        Running             gcp-auth                  0                   7c4b4df8b96d5       gcp-auth-d4c87556c-k9c6b
	0cfc3403b8cc5       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              3 minutes ago        Running             yakd                      0                   95888501f38a1       yakd-dashboard-9947fc6bf-8bsq2
	a13cd88dbba5a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:67202a0258c6f81d073f265f449a732c89cc1112a8e80ea27317294df6dce2b5   4 minutes ago        Exited              patch                     0                   a247b1b69f177       ingress-nginx-admission-patch-99gvm
	0a2cf31791b55       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98             4 minutes ago        Running             local-path-provisioner    0                   371be2b2a29b0       local-path-provisioner-78b46b4d5c-bm2qj
	db679508dca46       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:67202a0258c6f81d073f265f449a732c89cc1112a8e80ea27317294df6dce2b5   4 minutes ago        Exited              create                    0                   0789b857ff3df       ingress-nginx-admission-create-mmtxr
	1cffd7e59c1f9       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                                             4 minutes ago        Running             coredns                   0                   c90c5fb05bc10       coredns-5dd5756b68-mcrmc
	c98ef84383e7e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             4 minutes ago        Running             storage-provisioner       0                   37c574862d51e       storage-provisioner
	571fdeed77892       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                                             5 minutes ago        Running             kindnet-cni               0                   5b201e58c3529       kindnet-sdlzq
	e0f637bdadd6f       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39                                                             5 minutes ago        Running             kube-proxy                0                   1df45243b421b       kube-proxy-7tlcp
	c8206d4ed0945       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b                                                             5 minutes ago        Running             kube-controller-manager   0                   119e8fe8b4880       kube-controller-manager-addons-944407
	004afef605796       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419                                                             5 minutes ago        Running             kube-apiserver            0                   c2fb03292d011       kube-apiserver-addons-944407
	2e22fc0f3b152       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54                                                             5 minutes ago        Running             kube-scheduler            0                   40cfffca8064e       kube-scheduler-addons-944407
	2dbdfdb0b50a2       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                                             5 minutes ago        Running             etcd                      0                   f216aa602fbef       etcd-addons-944407
	
	
	==> coredns [1cffd7e59c1f9bb9ee0fb63e1c34df870f2abb1935f63033d354c00dd8621120] <==
	[INFO] 10.244.0.19:36354 - 27847 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001639359s
	[INFO] 10.244.0.19:36354 - 639 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001586947s
	[INFO] 10.244.0.19:36354 - 48399 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000104736s
	[INFO] 10.244.0.19:44734 - 8881 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000087079s
	[INFO] 10.244.0.19:44734 - 4501 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001530332s
	[INFO] 10.244.0.19:44734 - 15269 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000948841s
	[INFO] 10.244.0.19:44734 - 46118 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000051281s
	[INFO] 10.244.0.19:59880 - 28205 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000117388s
	[INFO] 10.244.0.19:56592 - 63284 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000037594s
	[INFO] 10.244.0.19:59880 - 6965 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000115312s
	[INFO] 10.244.0.19:56592 - 49574 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000104506s
	[INFO] 10.244.0.19:59880 - 39683 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000139361s
	[INFO] 10.244.0.19:56592 - 43985 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000095284s
	[INFO] 10.244.0.19:59880 - 7797 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000263272s
	[INFO] 10.244.0.19:56592 - 60856 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00006116s
	[INFO] 10.244.0.19:59880 - 29589 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000064573s
	[INFO] 10.244.0.19:56592 - 11628 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000033772s
	[INFO] 10.244.0.19:56592 - 47715 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000048589s
	[INFO] 10.244.0.19:59880 - 11904 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000068363s
	[INFO] 10.244.0.19:59880 - 45518 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001238532s
	[INFO] 10.244.0.19:56592 - 57162 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001320787s
	[INFO] 10.244.0.19:56592 - 5270 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000850579s
	[INFO] 10.244.0.19:59880 - 2578 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001088243s
	[INFO] 10.244.0.19:56592 - 20425 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000053053s
	[INFO] 10.244.0.19:59880 - 55720 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000067748s
	
	
	==> describe nodes <==
	Name:               addons-944407
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-944407
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=49acfca761ba3cce5d2bedb7b4a0191c7f924d23
	                    minikube.k8s.io/name=addons-944407
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_15T10_51_58_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-944407
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Jan 2024 10:51:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-944407
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Jan 2024 10:57:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Jan 2024 10:57:02 +0000   Mon, 15 Jan 2024 10:51:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Jan 2024 10:57:02 +0000   Mon, 15 Jan 2024 10:51:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Jan 2024 10:57:02 +0000   Mon, 15 Jan 2024 10:51:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Jan 2024 10:57:02 +0000   Mon, 15 Jan 2024 10:52:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-944407
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 59906070b1484f2bb3eff9acbfa9a1af
	  System UUID:                97d20689-c7fa-4555-8f52-be313797b160
	  Boot ID:                    2320f45f-1c30-479b-83e7-a1d3daee01d1
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-f62vd           0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m44s
	  gcp-auth                    gcp-auth-d4c87556c-k9c6b                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  headlamp                    headlamp-7ddfbb94ff-72t6n                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 coredns-5dd5756b68-mcrmc                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m2s
	  kube-system                 etcd-addons-944407                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m14s
	  kube-system                 kindnet-sdlzq                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m2s
	  kube-system                 kube-apiserver-addons-944407               250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 kube-controller-manager-addons-944407      200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 kube-proxy-7tlcp                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 kube-scheduler-addons-944407               100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  local-path-storage          local-path-provisioner-78b46b4d5c-bm2qj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-8bsq2             0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             348Mi (4%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m55s  kube-proxy       
	  Normal  Starting                 5m14s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m14s  kubelet          Node addons-944407 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m14s  kubelet          Node addons-944407 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m14s  kubelet          Node addons-944407 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m3s   node-controller  Node addons-944407 event: Registered Node addons-944407 in Controller
	  Normal  NodeReady                4m28s  kubelet          Node addons-944407 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000776] FS-Cache: N-cookie c=00000066 [p=0000005d fl=2 nc=0 na=1]
	[  +0.001036] FS-Cache: N-cookie d=00000000d00daa15{9p.inode} n=000000004eeaad6b
	[  +0.001160] FS-Cache: N-key=[8] 'a0643b0000000000'
	[  +0.004246] FS-Cache: Duplicate cookie detected
	[  +0.000761] FS-Cache: O-cookie c=00000060 [p=0000005d fl=226 nc=0 na=1]
	[  +0.001067] FS-Cache: O-cookie d=00000000d00daa15{9p.inode} n=0000000092ba99e0
	[  +0.001150] FS-Cache: O-key=[8] 'a0643b0000000000'
	[  +0.000824] FS-Cache: N-cookie c=00000067 [p=0000005d fl=2 nc=0 na=1]
	[  +0.001046] FS-Cache: N-cookie d=00000000d00daa15{9p.inode} n=00000000e2a492cd
	[  +0.001165] FS-Cache: N-key=[8] 'a0643b0000000000'
	[  +3.571357] FS-Cache: Duplicate cookie detected
	[  +0.000742] FS-Cache: O-cookie c=0000005e [p=0000005d fl=226 nc=0 na=1]
	[  +0.001051] FS-Cache: O-cookie d=00000000d00daa15{9p.inode} n=000000002dc74ee5
	[  +0.001242] FS-Cache: O-key=[8] '9f643b0000000000'
	[  +0.000761] FS-Cache: N-cookie c=00000069 [p=0000005d fl=2 nc=0 na=1]
	[  +0.001063] FS-Cache: N-cookie d=00000000d00daa15{9p.inode} n=00000000bd356433
	[  +0.001122] FS-Cache: N-key=[8] '9f643b0000000000'
	[  +0.406002] FS-Cache: Duplicate cookie detected
	[  +0.000766] FS-Cache: O-cookie c=00000063 [p=0000005d fl=226 nc=0 na=1]
	[  +0.001061] FS-Cache: O-cookie d=00000000d00daa15{9p.inode} n=00000000dec4625c
	[  +0.001217] FS-Cache: O-key=[8] 'a5643b0000000000'
	[  +0.000785] FS-Cache: N-cookie c=0000006a [p=0000005d fl=2 nc=0 na=1]
	[  +0.001056] FS-Cache: N-cookie d=00000000d00daa15{9p.inode} n=000000004ab240e5
	[  +0.001199] FS-Cache: N-key=[8] 'a5643b0000000000'
	[Jan15 09:57] systemd-journald[221]: Failed to send stream file descriptor to service manager: Connection refused
	
	
	==> etcd [2dbdfdb0b50a2162d98c7abf923aab37e085edb19fc3fa82a7ec7f63804cf9ba] <==
	{"level":"info","ts":"2024-01-15T10:51:50.597168Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-01-15T10:51:50.597203Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-01-15T10:51:50.598111Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-944407 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-15T10:51:50.59818Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-15T10:51:50.599262Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-15T10:51:50.602381Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-15T10:51:50.60351Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-15T10:51:50.603629Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-15T10:51:50.603704Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-15T10:51:50.606395Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-15T10:51:50.607388Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-01-15T10:51:50.607485Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-15T10:51:50.607499Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-15T10:52:10.779886Z","caller":"traceutil/trace.go:171","msg":"trace[664225573] transaction","detail":"{read_only:false; response_revision:360; number_of_response:1; }","duration":"166.416486ms","start":"2024-01-15T10:52:10.613453Z","end":"2024-01-15T10:52:10.77987Z","steps":["trace[664225573] 'process raft request'  (duration: 163.244095ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-15T10:52:10.78038Z","caller":"traceutil/trace.go:171","msg":"trace[2078466296] transaction","detail":"{read_only:false; response_revision:361; number_of_response:1; }","duration":"139.94748ms","start":"2024-01-15T10:52:10.640424Z","end":"2024-01-15T10:52:10.780371Z","steps":["trace[2078466296] 'process raft request'  (duration: 139.863503ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-15T10:52:11.661214Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.631693ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128026513393177450 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kindnet-sdlzq.17aa7fcdf1c11b85\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kindnet-sdlzq.17aa7fcdf1c11b85\" value_size:630 lease:8128026513393177156 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-01-15T10:52:11.666969Z","caller":"traceutil/trace.go:171","msg":"trace[40262737] transaction","detail":"{read_only:false; response_revision:366; number_of_response:1; }","duration":"142.371755ms","start":"2024-01-15T10:52:11.524586Z","end":"2024-01-15T10:52:11.666958Z","steps":["trace[40262737] 'process raft request'  (duration: 12.498062ms)","trace[40262737] 'compare'  (duration: 123.533972ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-15T10:52:11.666903Z","caller":"traceutil/trace.go:171","msg":"trace[1654260346] linearizableReadLoop","detail":"{readStateIndex:379; appliedIndex:378; }","duration":"130.954363ms","start":"2024-01-15T10:52:11.535929Z","end":"2024-01-15T10:52:11.666883Z","steps":["trace[1654260346] 'read index received'  (duration: 1.10421ms)","trace[1654260346] 'applied index is now lower than readState.Index'  (duration: 129.847503ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-15T10:52:11.673281Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.357752ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-public/\" range_end:\"/registry/serviceaccounts/kube-public0\" ","response":"range_response_count:1 size:179"}
	{"level":"info","ts":"2024-01-15T10:52:11.673318Z","caller":"traceutil/trace.go:171","msg":"trace[914463831] range","detail":"{range_begin:/registry/serviceaccounts/kube-public/; range_end:/registry/serviceaccounts/kube-public0; response_count:1; response_revision:366; }","duration":"137.407344ms","start":"2024-01-15T10:52:11.535899Z","end":"2024-01-15T10:52:11.673307Z","steps":["trace[914463831] 'agreement among raft nodes before linearized reading'  (duration: 137.296044ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-15T10:52:11.674065Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.0427ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" ","response":"range_response_count:1 size:612"}
	{"level":"info","ts":"2024-01-15T10:52:11.674101Z","caller":"traceutil/trace.go:171","msg":"trace[2033977149] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:366; }","duration":"138.080278ms","start":"2024-01-15T10:52:11.536011Z","end":"2024-01-15T10:52:11.674091Z","steps":["trace[2033977149] 'agreement among raft nodes before linearized reading'  (duration: 137.984477ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-15T10:52:11.778626Z","caller":"traceutil/trace.go:171","msg":"trace[236300279] transaction","detail":"{read_only:false; response_revision:367; number_of_response:1; }","duration":"104.077794ms","start":"2024-01-15T10:52:11.674531Z","end":"2024-01-15T10:52:11.778609Z","steps":["trace[236300279] 'process raft request'  (duration: 98.603881ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-15T10:52:13.799083Z","caller":"traceutil/trace.go:171","msg":"trace[2037705167] transaction","detail":"{read_only:false; response_revision:377; number_of_response:1; }","duration":"102.904965ms","start":"2024-01-15T10:52:13.696163Z","end":"2024-01-15T10:52:13.799068Z","steps":["trace[2037705167] 'process raft request'  (duration: 102.813858ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-15T10:52:14.138217Z","caller":"traceutil/trace.go:171","msg":"trace[916615804] transaction","detail":"{read_only:false; response_revision:379; number_of_response:1; }","duration":"138.147879ms","start":"2024-01-15T10:52:14.00005Z","end":"2024-01-15T10:52:14.138198Z","steps":["trace[916615804] 'process raft request'  (duration: 84.053974ms)","trace[916615804] 'compare'  (duration: 35.156342ms)"],"step_count":2}
	
	
	==> gcp-auth [8a3786415e0a5cb351606e06b7edf975e7c38122891d7b42b837bda43dc80380] <==
	2024/01/15 10:53:49 GCP Auth Webhook started!
	2024/01/15 10:54:03 Ready to marshal response ...
	2024/01/15 10:54:03 Ready to write response ...
	2024/01/15 10:54:11 Ready to marshal response ...
	2024/01/15 10:54:11 Ready to write response ...
	2024/01/15 10:54:27 Ready to marshal response ...
	2024/01/15 10:54:27 Ready to write response ...
	2024/01/15 10:54:43 Ready to marshal response ...
	2024/01/15 10:54:43 Ready to write response ...
	2024/01/15 10:55:13 Ready to marshal response ...
	2024/01/15 10:55:13 Ready to write response ...
	2024/01/15 10:55:13 Ready to marshal response ...
	2024/01/15 10:55:13 Ready to write response ...
	2024/01/15 10:55:20 Ready to marshal response ...
	2024/01/15 10:55:20 Ready to write response ...
	2024/01/15 10:55:29 Ready to marshal response ...
	2024/01/15 10:55:29 Ready to write response ...
	2024/01/15 10:55:29 Ready to marshal response ...
	2024/01/15 10:55:29 Ready to write response ...
	2024/01/15 10:55:29 Ready to marshal response ...
	2024/01/15 10:55:29 Ready to write response ...
	2024/01/15 10:56:46 Ready to marshal response ...
	2024/01/15 10:56:46 Ready to write response ...
	
	
	==> kernel <==
	 10:57:11 up  9:39,  0 users,  load average: 1.08, 1.40, 2.09
	Linux addons-944407 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [571fdeed778920c9157e47213aea7c0a80be14c879fad524eedd02c48e47bea5] <==
	I0115 10:55:02.849261       1 main.go:227] handling current node
	I0115 10:55:12.870194       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 10:55:12.870331       1 main.go:227] handling current node
	I0115 10:55:22.881617       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 10:55:22.881647       1 main.go:227] handling current node
	I0115 10:55:32.894802       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 10:55:32.894832       1 main.go:227] handling current node
	I0115 10:55:42.899419       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 10:55:42.899451       1 main.go:227] handling current node
	I0115 10:55:52.911169       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 10:55:52.911197       1 main.go:227] handling current node
	I0115 10:56:02.914895       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 10:56:02.914923       1 main.go:227] handling current node
	I0115 10:56:12.923190       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 10:56:12.923217       1 main.go:227] handling current node
	I0115 10:56:22.927640       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 10:56:22.927676       1 main.go:227] handling current node
	I0115 10:56:32.931628       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 10:56:32.931659       1 main.go:227] handling current node
	I0115 10:56:42.938517       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 10:56:42.938543       1 main.go:227] handling current node
	I0115 10:56:52.950348       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 10:56:52.950375       1 main.go:227] handling current node
	I0115 10:57:02.961939       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 10:57:02.961970       1 main.go:227] handling current node
	
	
	==> kube-apiserver [004afef6057961aca924edbe29eccd65bcdd18c33c510cdf2ee350344c889ce2] <==
	I0115 10:54:24.938496       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0115 10:54:26.734401       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0115 10:54:27.180447       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.108.140.96"}
	I0115 10:55:00.959184       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0115 10:55:00.959315       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0115 10:55:00.966957       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0115 10:55:00.967017       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0115 10:55:00.985534       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0115 10:55:00.985579       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0115 10:55:00.996441       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0115 10:55:00.996487       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0115 10:55:01.001337       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0115 10:55:01.001397       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0115 10:55:01.011235       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0115 10:55:01.011283       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0115 10:55:01.026143       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0115 10:55:01.026196       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0115 10:55:01.038484       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0115 10:55:01.038534       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0115 10:55:01.996917       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0115 10:55:02.039331       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0115 10:55:02.061319       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0115 10:55:04.539145       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0115 10:55:29.354465       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.132.83"}
	I0115 10:56:46.327434       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.110.198.210"}
	
	
	==> kube-controller-manager [c8206d4ed0945305cc89739ca7157e720f9fe5307a62f7ea775b5c3216935d60] <==
	E0115 10:55:44.496754       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0115 10:56:17.433187       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0115 10:56:17.433225       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0115 10:56:22.110912       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0115 10:56:22.110946       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0115 10:56:22.786347       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0115 10:56:22.786448       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0115 10:56:24.787432       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0115 10:56:24.787468       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0115 10:56:46.034611       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0115 10:56:46.054252       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-f62vd"
	I0115 10:56:46.075927       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="41.616281ms"
	I0115 10:56:46.104322       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="28.323909ms"
	I0115 10:56:46.104399       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="41.32µs"
	I0115 10:56:48.466622       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="107.091µs"
	I0115 10:56:49.473134       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="65.696µs"
	I0115 10:56:50.467305       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="85.782µs"
	W0115 10:56:52.231643       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0115 10:56:52.231675       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0115 10:57:00.970873       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0115 10:57:00.970906       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0115 10:57:01.504361       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="42.707µs"
	I0115 10:57:03.210834       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0115 10:57:03.215782       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="4.997µs"
	I0115 10:57:03.221846       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	
	
	==> kube-proxy [e0f637bdadd6fd1a149238e03a37807f415eb4baaf03e15eeb3502572fc8e51e] <==
	I0115 10:52:16.413256       1 server_others.go:69] "Using iptables proxy"
	I0115 10:52:16.501685       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0115 10:52:16.692686       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0115 10:52:16.695826       1 server_others.go:152] "Using iptables Proxier"
	I0115 10:52:16.695860       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0115 10:52:16.695867       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0115 10:52:16.697705       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0115 10:52:16.699168       1 server.go:846] "Version info" version="v1.28.4"
	I0115 10:52:16.699192       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0115 10:52:16.700162       1 config.go:188] "Starting service config controller"
	I0115 10:52:16.702001       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0115 10:52:16.702047       1 config.go:97] "Starting endpoint slice config controller"
	I0115 10:52:16.702061       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0115 10:52:16.708002       1 config.go:315] "Starting node config controller"
	I0115 10:52:16.709549       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0115 10:52:16.804881       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0115 10:52:16.805011       1 shared_informer.go:318] Caches are synced for service config
	I0115 10:52:16.810103       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [2e22fc0f3b1520c22575e9a2223b7b12df0efea067b94a1cd30af64d2ec15c24] <==
	W0115 10:51:54.077743       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0115 10:51:54.077781       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0115 10:51:54.976229       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0115 10:51:54.976359       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0115 10:51:55.004411       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0115 10:51:55.004465       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0115 10:51:55.018441       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0115 10:51:55.018510       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0115 10:51:55.049612       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0115 10:51:55.049651       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0115 10:51:55.067919       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0115 10:51:55.067961       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0115 10:51:55.110689       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0115 10:51:55.110730       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0115 10:51:55.118850       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0115 10:51:55.118982       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0115 10:51:55.193799       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0115 10:51:55.193834       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0115 10:51:55.218012       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0115 10:51:55.218058       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0115 10:51:55.275845       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0115 10:51:55.275956       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0115 10:51:55.322045       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0115 10:51:55.322198       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0115 10:51:57.373866       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 15 10:56:57 addons-944407 kubelet[1345]: E0115 10:56:57.424849    1345 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/d5813e031c54992121c1a8b9450e8ec735b9362651d443a1303cdc7c2f629164/diff" to get inode usage: stat /var/lib/containers/storage/overlay/d5813e031c54992121c1a8b9450e8ec735b9362651d443a1303cdc7c2f629164/diff: no such file or directory, extraDiskErr: <nil>
	Jan 15 10:56:57 addons-944407 kubelet[1345]: E0115 10:56:57.425935    1345 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/27209ac58a9a04e26b59e379457a18c92efa113fb0bb30a55f6c09a54dcba9f3/diff" to get inode usage: stat /var/lib/containers/storage/overlay/27209ac58a9a04e26b59e379457a18c92efa113fb0bb30a55f6c09a54dcba9f3/diff: no such file or directory, extraDiskErr: <nil>
	Jan 15 10:56:57 addons-944407 kubelet[1345]: E0115 10:56:57.429056    1345 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/de3644dc35d54e7a8019aa9a632622d43f26a442d0e70f22c576b7ca398ae78e/diff" to get inode usage: stat /var/lib/containers/storage/overlay/de3644dc35d54e7a8019aa9a632622d43f26a442d0e70f22c576b7ca398ae78e/diff: no such file or directory, extraDiskErr: <nil>
	Jan 15 10:57:01 addons-944407 kubelet[1345]: I0115 10:57:01.213881    1345 scope.go:117] "RemoveContainer" containerID="ac4eae806d89532ee6bd96b51f0a2019320c732442fd9f2b9857309440814ad0"
	Jan 15 10:57:01 addons-944407 kubelet[1345]: I0115 10:57:01.477697    1345 scope.go:117] "RemoveContainer" containerID="ac4eae806d89532ee6bd96b51f0a2019320c732442fd9f2b9857309440814ad0"
	Jan 15 10:57:01 addons-944407 kubelet[1345]: I0115 10:57:01.477993    1345 scope.go:117] "RemoveContainer" containerID="3f7a1d85223ba152b208facff7584ff9cb160ab25371afd609ad67d9a46a5df3"
	Jan 15 10:57:01 addons-944407 kubelet[1345]: E0115 10:57:01.478252    1345 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-f62vd_default(3bc52201-6905-4fb8-9e73-14c89e9da107)\"" pod="default/hello-world-app-5d77478584-f62vd" podUID="3bc52201-6905-4fb8-9e73-14c89e9da107"
	Jan 15 10:57:02 addons-944407 kubelet[1345]: I0115 10:57:02.301550    1345 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxr8j\" (UniqueName: \"kubernetes.io/projected/6a29df1d-e7cf-4984-80f5-a26c40fc0a4a-kube-api-access-jxr8j\") pod \"6a29df1d-e7cf-4984-80f5-a26c40fc0a4a\" (UID: \"6a29df1d-e7cf-4984-80f5-a26c40fc0a4a\") "
	Jan 15 10:57:02 addons-944407 kubelet[1345]: I0115 10:57:02.306344    1345 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a29df1d-e7cf-4984-80f5-a26c40fc0a4a-kube-api-access-jxr8j" (OuterVolumeSpecName: "kube-api-access-jxr8j") pod "6a29df1d-e7cf-4984-80f5-a26c40fc0a4a" (UID: "6a29df1d-e7cf-4984-80f5-a26c40fc0a4a"). InnerVolumeSpecName "kube-api-access-jxr8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 15 10:57:02 addons-944407 kubelet[1345]: I0115 10:57:02.402273    1345 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-jxr8j\" (UniqueName: \"kubernetes.io/projected/6a29df1d-e7cf-4984-80f5-a26c40fc0a4a-kube-api-access-jxr8j\") on node \"addons-944407\" DevicePath \"\""
	Jan 15 10:57:02 addons-944407 kubelet[1345]: I0115 10:57:02.482446    1345 scope.go:117] "RemoveContainer" containerID="6be972dbb541c2ca3c79a49531722a6e08c302456f3e569a32b26e0e28409f7f"
	Jan 15 10:57:03 addons-944407 kubelet[1345]: I0115 10:57:03.215317    1345 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="6a29df1d-e7cf-4984-80f5-a26c40fc0a4a" path="/var/lib/kubelet/pods/6a29df1d-e7cf-4984-80f5-a26c40fc0a4a/volumes"
	Jan 15 10:57:05 addons-944407 kubelet[1345]: I0115 10:57:05.215168    1345 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="18b8977d-3fbc-46f8-bd4d-20f57461fc37" path="/var/lib/kubelet/pods/18b8977d-3fbc-46f8-bd4d-20f57461fc37/volumes"
	Jan 15 10:57:05 addons-944407 kubelet[1345]: I0115 10:57:05.215598    1345 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="19777f6e-ee42-4d70-a966-9c9e5019ebda" path="/var/lib/kubelet/pods/19777f6e-ee42-4d70-a966-9c9e5019ebda/volumes"
	Jan 15 10:57:06 addons-944407 kubelet[1345]: I0115 10:57:06.492193    1345 scope.go:117] "RemoveContainer" containerID="817525992a31e7a38cca779fb0523b549382ac342a9a8455b061efd114a666fa"
	Jan 15 10:57:06 addons-944407 kubelet[1345]: I0115 10:57:06.512685    1345 scope.go:117] "RemoveContainer" containerID="817525992a31e7a38cca779fb0523b549382ac342a9a8455b061efd114a666fa"
	Jan 15 10:57:06 addons-944407 kubelet[1345]: E0115 10:57:06.513074    1345 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"817525992a31e7a38cca779fb0523b549382ac342a9a8455b061efd114a666fa\": container with ID starting with 817525992a31e7a38cca779fb0523b549382ac342a9a8455b061efd114a666fa not found: ID does not exist" containerID="817525992a31e7a38cca779fb0523b549382ac342a9a8455b061efd114a666fa"
	Jan 15 10:57:06 addons-944407 kubelet[1345]: I0115 10:57:06.513118    1345 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"817525992a31e7a38cca779fb0523b549382ac342a9a8455b061efd114a666fa"} err="failed to get container status \"817525992a31e7a38cca779fb0523b549382ac342a9a8455b061efd114a666fa\": rpc error: code = NotFound desc = could not find container \"817525992a31e7a38cca779fb0523b549382ac342a9a8455b061efd114a666fa\": container with ID starting with 817525992a31e7a38cca779fb0523b549382ac342a9a8455b061efd114a666fa not found: ID does not exist"
	Jan 15 10:57:06 addons-944407 kubelet[1345]: I0115 10:57:06.536008    1345 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4b450b63-d351-4465-a521-a552650162a9-webhook-cert\") pod \"4b450b63-d351-4465-a521-a552650162a9\" (UID: \"4b450b63-d351-4465-a521-a552650162a9\") "
	Jan 15 10:57:06 addons-944407 kubelet[1345]: I0115 10:57:06.536064    1345 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l445b\" (UniqueName: \"kubernetes.io/projected/4b450b63-d351-4465-a521-a552650162a9-kube-api-access-l445b\") pod \"4b450b63-d351-4465-a521-a552650162a9\" (UID: \"4b450b63-d351-4465-a521-a552650162a9\") "
	Jan 15 10:57:06 addons-944407 kubelet[1345]: I0115 10:57:06.538590    1345 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b450b63-d351-4465-a521-a552650162a9-kube-api-access-l445b" (OuterVolumeSpecName: "kube-api-access-l445b") pod "4b450b63-d351-4465-a521-a552650162a9" (UID: "4b450b63-d351-4465-a521-a552650162a9"). InnerVolumeSpecName "kube-api-access-l445b". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 15 10:57:06 addons-944407 kubelet[1345]: I0115 10:57:06.539435    1345 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b450b63-d351-4465-a521-a552650162a9-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "4b450b63-d351-4465-a521-a552650162a9" (UID: "4b450b63-d351-4465-a521-a552650162a9"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 15 10:57:06 addons-944407 kubelet[1345]: I0115 10:57:06.636957    1345 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4b450b63-d351-4465-a521-a552650162a9-webhook-cert\") on node \"addons-944407\" DevicePath \"\""
	Jan 15 10:57:06 addons-944407 kubelet[1345]: I0115 10:57:06.637003    1345 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-l445b\" (UniqueName: \"kubernetes.io/projected/4b450b63-d351-4465-a521-a552650162a9-kube-api-access-l445b\") on node \"addons-944407\" DevicePath \"\""
	Jan 15 10:57:07 addons-944407 kubelet[1345]: I0115 10:57:07.215254    1345 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="4b450b63-d351-4465-a521-a552650162a9" path="/var/lib/kubelet/pods/4b450b63-d351-4465-a521-a552650162a9/volumes"
	
	
	==> storage-provisioner [c98ef84383e7ea34bf46701ce94f0311133b29788fd17bec5250caf41fc43f7f] <==
	I0115 10:52:44.061445       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0115 10:52:44.076860       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0115 10:52:44.077096       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0115 10:52:44.085579       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0115 10:52:44.085788       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5c3b57c1-e71d-4885-9c18-c6dc2e7b4115", APIVersion:"v1", ResourceVersion:"890", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-944407_01e6bd4e-a4fe-423e-8ed0-22b792be9755 became leader
	I0115 10:52:44.087397       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-944407_01e6bd4e-a4fe-423e-8ed0-22b792be9755!
	I0115 10:52:44.188283       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-944407_01e6bd4e-a4fe-423e-8ed0-22b792be9755!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-944407 -n addons-944407
helpers_test.go:261: (dbg) Run:  kubectl --context addons-944407 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (166.59s)
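Triage note: in the run above the nginx pod went Running within ~9s, yet the in-VM curl died with status 28 (which matches curl's timeout exit code) and nslookup against 192.168.49.2 also timed out, so the suspect is the ingress controller or ingress-dns path rather than the workload pod. A minimal sketch of a manual re-check, assuming the addons-944407 profile is still up; the profile name, Host header, and node IP are taken verbatim from the log above:

	# Is the ingress controller pod actually Ready?
	kubectl --context addons-944407 -n ingress-nginx get pods \
	  --selector=app.kubernetes.io/component=controller
	# Re-run the same probe with an explicit timeout (curl exits 28 on timeout)
	minikube -p addons-944407 ssh "curl -s --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
	# Does ingress-dns answer on the node IP at all?
	nslookup hello-john.test 192.168.49.2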

TestIngressAddonLegacy/serial/ValidateIngressAddons (183.29s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:207: (dbg) Run:  kubectl --context ingress-addon-legacy-406064 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
E0115 11:04:18.899876 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/client.crt: no such file or directory
addons_test.go:207: (dbg) Done: kubectl --context ingress-addon-legacy-406064 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (15.91144873s)
addons_test.go:232: (dbg) Run:  kubectl --context ingress-addon-legacy-406064 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context ingress-addon-legacy-406064 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [48c0af98-a524-4e8d-8d86-189495e3f8b2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [48c0af98-a524-4e8d-8d86-189495e3f8b2] Running
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 8.003051545s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-406064 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0115 11:06:16.697144 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/functional-641147/client.crt: no such file or directory
E0115 11:06:16.702456 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/functional-641147/client.crt: no such file or directory
E0115 11:06:16.712760 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/functional-641147/client.crt: no such file or directory
E0115 11:06:16.733037 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/functional-641147/client.crt: no such file or directory
E0115 11:06:16.773287 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/functional-641147/client.crt: no such file or directory
E0115 11:06:16.853585 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/functional-641147/client.crt: no such file or directory
E0115 11:06:17.014034 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/functional-641147/client.crt: no such file or directory
E0115 11:06:17.334612 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/functional-641147/client.crt: no such file or directory
E0115 11:06:17.975465 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/functional-641147/client.crt: no such file or directory
E0115 11:06:19.255922 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/functional-641147/client.crt: no such file or directory
E0115 11:06:21.816281 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/functional-641147/client.crt: no such file or directory
E0115 11:06:26.936497 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/functional-641147/client.crt: no such file or directory
E0115 11:06:37.177014 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/functional-641147/client.crt: no such file or directory
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ingress-addon-legacy-406064 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.284531797s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context ingress-addon-legacy-406064 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-406064 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.020002795s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-406064 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-406064 addons disable ingress-dns --alsologtostderr -v=1: (1.967983435s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-406064 addons disable ingress --alsologtostderr -v=1
E0115 11:06:57.657531 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/functional-641147/client.crt: no such file or directory
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-406064 addons disable ingress --alsologtostderr -v=1: (7.545556244s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-406064
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-406064:

-- stdout --
	[
	    {
	        "Id": "7041653abb04088193df62c8903835ea12ed97630dcb9a8a7318588b0b1d4bfd",
	        "Created": "2024-01-15T11:02:42.884581396Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1657822,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-15T11:02:43.192156563Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20e2d9b56eb2e595fd2b9c5719a0e58f3d7f8c692190d8fde2558cb6a9714f01",
	        "ResolvConfPath": "/var/lib/docker/containers/7041653abb04088193df62c8903835ea12ed97630dcb9a8a7318588b0b1d4bfd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7041653abb04088193df62c8903835ea12ed97630dcb9a8a7318588b0b1d4bfd/hostname",
	        "HostsPath": "/var/lib/docker/containers/7041653abb04088193df62c8903835ea12ed97630dcb9a8a7318588b0b1d4bfd/hosts",
	        "LogPath": "/var/lib/docker/containers/7041653abb04088193df62c8903835ea12ed97630dcb9a8a7318588b0b1d4bfd/7041653abb04088193df62c8903835ea12ed97630dcb9a8a7318588b0b1d4bfd-json.log",
	        "Name": "/ingress-addon-legacy-406064",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-406064:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-406064",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/93d768e9a78f77f78d268fb12b2b9e4d87c418118bd77629a93736615f036267-init/diff:/var/lib/docker/overlay2/875764cb66056ccf89d3b82171ed27a7d9d817926a8469405b5a9bf1621232cc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/93d768e9a78f77f78d268fb12b2b9e4d87c418118bd77629a93736615f036267/merged",
	                "UpperDir": "/var/lib/docker/overlay2/93d768e9a78f77f78d268fb12b2b9e4d87c418118bd77629a93736615f036267/diff",
	                "WorkDir": "/var/lib/docker/overlay2/93d768e9a78f77f78d268fb12b2b9e4d87c418118bd77629a93736615f036267/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-406064",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-406064/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-406064",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-406064",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-406064",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8aa6ee8deeee258a9ef3b50c8473cdd581d67d7a38c891d32f1eeadc3415caed",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34734"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34733"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34730"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34732"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34731"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/8aa6ee8deeee",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-406064": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "7041653abb04",
	                        "ingress-addon-legacy-406064"
	                    ],
	                    "NetworkID": "290f1797e319a5e839d2fbcfbdc9acb4bfb7b97b73e73e2fd8725b7ebd93fe04",
	                    "EndpointID": "900cadfbd06d3e1a1f5387facec55646ec0ea8733e11d4d17365ca222f2c786d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-406064 -n ingress-addon-legacy-406064
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-406064 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-406064 logs -n 25: (1.418479932s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| Command |                                  Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| image   | functional-641147 image load --daemon                                  | functional-641147           | jenkins | v1.32.0 | 15 Jan 24 11:02 UTC | 15 Jan 24 11:02 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-641147               |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-641147 image ls                                             | functional-641147           | jenkins | v1.32.0 | 15 Jan 24 11:02 UTC | 15 Jan 24 11:02 UTC |
	| image   | functional-641147 image load --daemon                                  | functional-641147           | jenkins | v1.32.0 | 15 Jan 24 11:02 UTC | 15 Jan 24 11:02 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-641147               |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-641147 image ls                                             | functional-641147           | jenkins | v1.32.0 | 15 Jan 24 11:02 UTC | 15 Jan 24 11:02 UTC |
	| image   | functional-641147 image save                                           | functional-641147           | jenkins | v1.32.0 | 15 Jan 24 11:02 UTC | 15 Jan 24 11:02 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-641147               |                             |         |         |                     |                     |
	|         | /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-641147 image rm                                             | functional-641147           | jenkins | v1.32.0 | 15 Jan 24 11:02 UTC | 15 Jan 24 11:02 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-641147               |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-641147 image ls                                             | functional-641147           | jenkins | v1.32.0 | 15 Jan 24 11:02 UTC | 15 Jan 24 11:02 UTC |
	| image   | functional-641147 image load                                           | functional-641147           | jenkins | v1.32.0 | 15 Jan 24 11:02 UTC | 15 Jan 24 11:02 UTC |
	|         | /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-641147 image ls                                             | functional-641147           | jenkins | v1.32.0 | 15 Jan 24 11:02 UTC | 15 Jan 24 11:02 UTC |
	| image   | functional-641147 image save --daemon                                  | functional-641147           | jenkins | v1.32.0 | 15 Jan 24 11:02 UTC | 15 Jan 24 11:02 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-641147               |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-641147                                                      | functional-641147           | jenkins | v1.32.0 | 15 Jan 24 11:02 UTC | 15 Jan 24 11:02 UTC |
	|         | image ls --format short                                                |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-641147                                                      | functional-641147           | jenkins | v1.32.0 | 15 Jan 24 11:02 UTC | 15 Jan 24 11:02 UTC |
	|         | image ls --format yaml                                                 |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh     | functional-641147 ssh pgrep                                            | functional-641147           | jenkins | v1.32.0 | 15 Jan 24 11:02 UTC |                     |
	|         | buildkitd                                                              |                             |         |         |                     |                     |
	| image   | functional-641147                                                      | functional-641147           | jenkins | v1.32.0 | 15 Jan 24 11:02 UTC | 15 Jan 24 11:02 UTC |
	|         | image ls --format json                                                 |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-641147 image build -t                                       | functional-641147           | jenkins | v1.32.0 | 15 Jan 24 11:02 UTC | 15 Jan 24 11:02 UTC |
	|         | localhost/my-image:functional-641147                                   |                             |         |         |                     |                     |
	|         | testdata/build --alsologtostderr                                       |                             |         |         |                     |                     |
	| image   | functional-641147                                                      | functional-641147           | jenkins | v1.32.0 | 15 Jan 24 11:02 UTC | 15 Jan 24 11:02 UTC |
	|         | image ls --format table                                                |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-641147 image ls                                             | functional-641147           | jenkins | v1.32.0 | 15 Jan 24 11:02 UTC | 15 Jan 24 11:02 UTC |
	| delete  | -p functional-641147                                                   | functional-641147           | jenkins | v1.32.0 | 15 Jan 24 11:02 UTC | 15 Jan 24 11:02 UTC |
	| start   | -p ingress-addon-legacy-406064                                         | ingress-addon-legacy-406064 | jenkins | v1.32.0 | 15 Jan 24 11:02 UTC | 15 Jan 24 11:03 UTC |
	|         | --kubernetes-version=v1.18.20                                          |                             |         |         |                     |                     |
	|         | --memory=4096 --wait=true                                              |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	|         | -v=5 --driver=docker                                                   |                             |         |         |                     |                     |
	|         | --container-runtime=crio                                               |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-406064                                            | ingress-addon-legacy-406064 | jenkins | v1.32.0 | 15 Jan 24 11:03 UTC | 15 Jan 24 11:04 UTC |
	|         | addons enable ingress                                                  |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-406064                                            | ingress-addon-legacy-406064 | jenkins | v1.32.0 | 15 Jan 24 11:04 UTC | 15 Jan 24 11:04 UTC |
	|         | addons enable ingress-dns                                              |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| ssh     | ingress-addon-legacy-406064                                            | ingress-addon-legacy-406064 | jenkins | v1.32.0 | 15 Jan 24 11:04 UTC |                     |
	|         | ssh curl -s http://127.0.0.1/                                          |                             |         |         |                     |                     |
	|         | -H 'Host: nginx.example.com'                                           |                             |         |         |                     |                     |
	| ip      | ingress-addon-legacy-406064 ip                                         | ingress-addon-legacy-406064 | jenkins | v1.32.0 | 15 Jan 24 11:06 UTC | 15 Jan 24 11:06 UTC |
	| addons  | ingress-addon-legacy-406064                                            | ingress-addon-legacy-406064 | jenkins | v1.32.0 | 15 Jan 24 11:06 UTC | 15 Jan 24 11:06 UTC |
	|         | addons disable ingress-dns                                             |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-406064                                            | ingress-addon-legacy-406064 | jenkins | v1.32.0 | 15 Jan 24 11:06 UTC | 15 Jan 24 11:07 UTC |
	|         | addons disable ingress                                                 |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	|---------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
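	# Editor's note: the "ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'" row
	# above has no End Time because the probe never returned before the test gave up. A
	# hedged way to re-run it by hand, assuming the profile were still running
	# (commands illustrative, not taken from this log):
	#   minikube -p ingress-addon-legacy-406064 ssh -- \
	#     curl -s -o /dev/null -w '%{http_code}\n' -H 'Host: nginx.example.com' http://127.0.0.1/
	#   curl -s --resolve nginx.example.com:80:192.168.49.2 http://nginx.example.com/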
	
	
	==> Last Start <==
	Log file created at: 2024/01/15 11:02:26
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0115 11:02:26.015496 1657368 out.go:296] Setting OutFile to fd 1 ...
	I0115 11:02:26.015638 1657368 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 11:02:26.015646 1657368 out.go:309] Setting ErrFile to fd 2...
	I0115 11:02:26.015652 1657368 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 11:02:26.015932 1657368 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-1625104/.minikube/bin
	I0115 11:02:26.016400 1657368 out.go:303] Setting JSON to false
	I0115 11:02:26.017255 1657368 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":35088,"bootTime":1705281458,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0115 11:02:26.017334 1657368 start.go:138] virtualization:  
	I0115 11:02:26.020215 1657368 out.go:177] * [ingress-addon-legacy-406064] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0115 11:02:26.022473 1657368 out.go:177]   - MINIKUBE_LOCATION=17953
	I0115 11:02:26.024606 1657368 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 11:02:26.022626 1657368 notify.go:220] Checking for updates...
	I0115 11:02:26.028804 1657368 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17953-1625104/kubeconfig
	I0115 11:02:26.030999 1657368 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-1625104/.minikube
	I0115 11:02:26.033162 1657368 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0115 11:02:26.034968 1657368 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0115 11:02:26.037764 1657368 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 11:02:26.063516 1657368 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0115 11:02:26.063642 1657368 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 11:02:26.151078 1657368 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2024-01-15 11:02:26.141287003 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0115 11:02:26.151207 1657368 docker.go:295] overlay module found
	I0115 11:02:26.154021 1657368 out.go:177] * Using the docker driver based on user configuration
	I0115 11:02:26.156657 1657368 start.go:298] selected driver: docker
	I0115 11:02:26.156681 1657368 start.go:902] validating driver "docker" against <nil>
	I0115 11:02:26.156696 1657368 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0115 11:02:26.157350 1657368 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 11:02:26.223249 1657368 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2024-01-15 11:02:26.213887178 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0115 11:02:26.223415 1657368 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0115 11:02:26.223639 1657368 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0115 11:02:26.225809 1657368 out.go:177] * Using Docker driver with root privileges
	I0115 11:02:26.228051 1657368 cni.go:84] Creating CNI manager for ""
	I0115 11:02:26.228087 1657368 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0115 11:02:26.228098 1657368 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0115 11:02:26.228113 1657368 start_flags.go:321] config:
	{Name:ingress-addon-legacy-406064 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-406064 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 11:02:26.230782 1657368 out.go:177] * Starting control plane node ingress-addon-legacy-406064 in cluster ingress-addon-legacy-406064
	I0115 11:02:26.233063 1657368 cache.go:121] Beginning downloading kic base image for docker with crio
	I0115 11:02:26.235050 1657368 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0115 11:02:26.237081 1657368 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0115 11:02:26.237046 1657368 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0115 11:02:26.254977 1657368 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0115 11:02:26.255000 1657368 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0115 11:02:26.299421 1657368 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I0115 11:02:26.299444 1657368 cache.go:56] Caching tarball of preloaded images
	I0115 11:02:26.299610 1657368 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0115 11:02:26.301688 1657368 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0115 11:02:26.303745 1657368 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0115 11:02:26.409378 1657368 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4?checksum=md5:8ddd7f37d9a9977fe856222993d36c3d -> /home/jenkins/minikube-integration/17953-1625104/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I0115 11:02:35.076213 1657368 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0115 11:02:35.076318 1657368 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17953-1625104/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0115 11:02:36.278695 1657368 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I0115 11:02:36.279071 1657368 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/config.json ...
	I0115 11:02:36.279104 1657368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/config.json: {Name:mka00e93b2a1aa5c28beb164e42f7951137fd95c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 11:02:36.279289 1657368 cache.go:194] Successfully downloaded all kic artifacts
	I0115 11:02:36.279351 1657368 start.go:365] acquiring machines lock for ingress-addon-legacy-406064: {Name:mk83d0d7b135779d30ff323378b050874f6e37c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 11:02:36.279410 1657368 start.go:369] acquired machines lock for "ingress-addon-legacy-406064" in 44.43µs
	I0115 11:02:36.279433 1657368 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-406064 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-406064 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0115 11:02:36.279498 1657368 start.go:125] createHost starting for "" (driver="docker")
	I0115 11:02:36.282259 1657368 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0115 11:02:36.282574 1657368 start.go:159] libmachine.API.Create for "ingress-addon-legacy-406064" (driver="docker")
	I0115 11:02:36.282607 1657368 client.go:168] LocalClient.Create starting
	I0115 11:02:36.282727 1657368 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/ca.pem
	I0115 11:02:36.282764 1657368 main.go:141] libmachine: Decoding PEM data...
	I0115 11:02:36.282784 1657368 main.go:141] libmachine: Parsing certificate...
	I0115 11:02:36.282885 1657368 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/cert.pem
	I0115 11:02:36.282917 1657368 main.go:141] libmachine: Decoding PEM data...
	I0115 11:02:36.282933 1657368 main.go:141] libmachine: Parsing certificate...
	I0115 11:02:36.283319 1657368 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-406064 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0115 11:02:36.300003 1657368 cli_runner.go:211] docker network inspect ingress-addon-legacy-406064 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0115 11:02:36.300084 1657368 network_create.go:281] running [docker network inspect ingress-addon-legacy-406064] to gather additional debugging logs...
	I0115 11:02:36.300106 1657368 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-406064
	W0115 11:02:36.316413 1657368 cli_runner.go:211] docker network inspect ingress-addon-legacy-406064 returned with exit code 1
	I0115 11:02:36.316446 1657368 network_create.go:284] error running [docker network inspect ingress-addon-legacy-406064]: docker network inspect ingress-addon-legacy-406064: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-406064 not found
	I0115 11:02:36.316460 1657368 network_create.go:286] output of [docker network inspect ingress-addon-legacy-406064]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-406064 not found
	
	** /stderr **
	I0115 11:02:36.316569 1657368 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0115 11:02:36.333900 1657368 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400043e560}
	I0115 11:02:36.333938 1657368 network_create.go:124] attempt to create docker network ingress-addon-legacy-406064 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0115 11:02:36.334006 1657368 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-406064 ingress-addon-legacy-406064
	I0115 11:02:36.401210 1657368 network_create.go:108] docker network ingress-addon-legacy-406064 192.168.49.0/24 created
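	# Editor's note (illustrative, not from this log): docker's own inspect templates
	# can confirm the subnet/gateway of the network just created --
	#   docker network inspect ingress-addon-legacy-406064 \
	#     --format '{{(index .IPAM.Config 0).Subnet}} via {{(index .IPAM.Config 0).Gateway}}'
	#   # expected: 192.168.49.0/24 via 192.168.49.1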
	I0115 11:02:36.401244 1657368 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-406064" container
	I0115 11:02:36.401318 1657368 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0115 11:02:36.417073 1657368 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-406064 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-406064 --label created_by.minikube.sigs.k8s.io=true
	I0115 11:02:36.439597 1657368 oci.go:103] Successfully created a docker volume ingress-addon-legacy-406064
	I0115 11:02:36.439698 1657368 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-406064-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-406064 --entrypoint /usr/bin/test -v ingress-addon-legacy-406064:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0115 11:02:37.947018 1657368 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-406064-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-406064 --entrypoint /usr/bin/test -v ingress-addon-legacy-406064:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib: (1.507250392s)
	I0115 11:02:37.947047 1657368 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-406064
	I0115 11:02:37.947071 1657368 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0115 11:02:37.947089 1657368 kic.go:194] Starting extracting preloaded images to volume ...
	I0115 11:02:37.947169 1657368 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17953-1625104/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-406064:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0115 11:02:42.803626 1657368 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17953-1625104/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-406064:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.856409753s)
	I0115 11:02:42.803656 1657368 kic.go:203] duration metric: took 4.856563 seconds to extract preloaded images to volume
	W0115 11:02:42.803797 1657368 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0115 11:02:42.803915 1657368 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0115 11:02:42.868120 1657368 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-406064 --name ingress-addon-legacy-406064 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-406064 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-406064 --network ingress-addon-legacy-406064 --ip 192.168.49.2 --volume ingress-addon-legacy-406064:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0
	I0115 11:02:43.200540 1657368 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-406064 --format={{.State.Running}}
	I0115 11:02:43.221316 1657368 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-406064 --format={{.State.Status}}
	I0115 11:02:43.250600 1657368 cli_runner.go:164] Run: docker exec ingress-addon-legacy-406064 stat /var/lib/dpkg/alternatives/iptables
	I0115 11:02:43.323751 1657368 oci.go:144] the created container "ingress-addon-legacy-406064" has a running status.
	I0115 11:02:43.323781 1657368 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17953-1625104/.minikube/machines/ingress-addon-legacy-406064/id_rsa...
	I0115 11:02:43.994956 1657368 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-1625104/.minikube/machines/ingress-addon-legacy-406064/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0115 11:02:43.995047 1657368 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17953-1625104/.minikube/machines/ingress-addon-legacy-406064/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0115 11:02:44.030885 1657368 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-406064 --format={{.State.Status}}
	I0115 11:02:44.055090 1657368 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0115 11:02:44.055109 1657368 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-406064 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0115 11:02:44.131116 1657368 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-406064 --format={{.State.Status}}
	I0115 11:02:44.170510 1657368 machine.go:88] provisioning docker machine ...
	I0115 11:02:44.170557 1657368 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-406064"
	I0115 11:02:44.170642 1657368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-406064
	I0115 11:02:44.210986 1657368 main.go:141] libmachine: Using SSH client type: native
	I0115 11:02:44.211475 1657368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfbd0] 0x3c2340 <nil>  [] 0s} 127.0.0.1 34734 <nil> <nil>}
	I0115 11:02:44.211494 1657368 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-406064 && echo "ingress-addon-legacy-406064" | sudo tee /etc/hostname
	I0115 11:02:44.393434 1657368 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-406064
	
	I0115 11:02:44.393511 1657368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-406064
	I0115 11:02:44.423644 1657368 main.go:141] libmachine: Using SSH client type: native
	I0115 11:02:44.424044 1657368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfbd0] 0x3c2340 <nil>  [] 0s} 127.0.0.1 34734 <nil> <nil>}
	I0115 11:02:44.424062 1657368 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-406064' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-406064/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-406064' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0115 11:02:44.579519 1657368 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0115 11:02:44.579546 1657368 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17953-1625104/.minikube CaCertPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17953-1625104/.minikube}
	I0115 11:02:44.579574 1657368 ubuntu.go:177] setting up certificates
	I0115 11:02:44.579586 1657368 provision.go:83] configureAuth start
	I0115 11:02:44.579643 1657368 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-406064
	I0115 11:02:44.597606 1657368 provision.go:138] copyHostCerts
	I0115 11:02:44.597643 1657368 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17953-1625104/.minikube/ca.pem
	I0115 11:02:44.597673 1657368 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-1625104/.minikube/ca.pem, removing ...
	I0115 11:02:44.597680 1657368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-1625104/.minikube/ca.pem
	I0115 11:02:44.597753 1657368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17953-1625104/.minikube/ca.pem (1082 bytes)
	I0115 11:02:44.597835 1657368 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17953-1625104/.minikube/cert.pem
	I0115 11:02:44.597851 1657368 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-1625104/.minikube/cert.pem, removing ...
	I0115 11:02:44.597856 1657368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-1625104/.minikube/cert.pem
	I0115 11:02:44.597880 1657368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17953-1625104/.minikube/cert.pem (1123 bytes)
	I0115 11:02:44.597918 1657368 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17953-1625104/.minikube/key.pem
	I0115 11:02:44.597932 1657368 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-1625104/.minikube/key.pem, removing ...
	I0115 11:02:44.597936 1657368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-1625104/.minikube/key.pem
	I0115 11:02:44.597961 1657368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17953-1625104/.minikube/key.pem (1675 bytes)
	I0115 11:02:44.598031 1657368 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17953-1625104/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17953-1625104/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17953-1625104/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-406064 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-406064]
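	# Editor's note (illustrative, not from this log): the SANs baked into the server
	# cert generated above can be confirmed with openssl --
	#   openssl x509 -noout -text \
	#     -in /home/jenkins/minikube-integration/17953-1625104/.minikube/machines/server.pem \
	#     | grep -A1 'Subject Alternative Name'
	#   # should list 192.168.49.2, 127.0.0.1, localhost, minikube, ingress-addon-legacy-406064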
	I0115 11:02:44.901346 1657368 provision.go:172] copyRemoteCerts
	I0115 11:02:44.901438 1657368 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0115 11:02:44.901481 1657368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-406064
	I0115 11:02:44.921330 1657368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34734 SSHKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/ingress-addon-legacy-406064/id_rsa Username:docker}
	I0115 11:02:45.025643 1657368 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0115 11:02:45.025730 1657368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0115 11:02:45.080573 1657368 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-1625104/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0115 11:02:45.080648 1657368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-1625104/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0115 11:02:45.116579 1657368 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-1625104/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0115 11:02:45.116653 1657368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-1625104/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0115 11:02:45.153209 1657368 provision.go:86] duration metric: configureAuth took 573.608753ms
	I0115 11:02:45.153286 1657368 ubuntu.go:193] setting minikube options for container-runtime
	I0115 11:02:45.153527 1657368 config.go:182] Loaded profile config "ingress-addon-legacy-406064": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0115 11:02:45.153658 1657368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-406064
	I0115 11:02:45.173708 1657368 main.go:141] libmachine: Using SSH client type: native
	I0115 11:02:45.174159 1657368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfbd0] 0x3c2340 <nil>  [] 0s} 127.0.0.1 34734 <nil> <nil>}
	I0115 11:02:45.174183 1657368 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0115 11:02:45.461281 1657368 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
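	# Editor's note: "%!s(MISSING)" in the command above is Go's fmt marker for a
	# format verb with no matching operand -- an artifact of how the command was
	# logged, not of what ran; the remote shell almost certainly received a literal
	# printf %s with the CRIO_MINIKUBE_OPTIONS content. The same marker recurs below
	# in the find -printf and stat -c lines.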
	
	I0115 11:02:45.461310 1657368 machine.go:91] provisioned docker machine in 1.290772678s
	I0115 11:02:45.461320 1657368 client.go:171] LocalClient.Create took 9.178702403s
	I0115 11:02:45.461333 1657368 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-406064" took 9.178763496s
	I0115 11:02:45.461341 1657368 start.go:300] post-start starting for "ingress-addon-legacy-406064" (driver="docker")
	I0115 11:02:45.461352 1657368 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0115 11:02:45.461431 1657368 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0115 11:02:45.461484 1657368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-406064
	I0115 11:02:45.480112 1657368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34734 SSHKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/ingress-addon-legacy-406064/id_rsa Username:docker}
	I0115 11:02:45.581698 1657368 ssh_runner.go:195] Run: cat /etc/os-release
	I0115 11:02:45.586067 1657368 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0115 11:02:45.586104 1657368 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0115 11:02:45.586116 1657368 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0115 11:02:45.586123 1657368 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0115 11:02:45.586134 1657368 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-1625104/.minikube/addons for local assets ...
	I0115 11:02:45.586202 1657368 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-1625104/.minikube/files for local assets ...
	I0115 11:02:45.586299 1657368 filesync.go:149] local asset: /home/jenkins/minikube-integration/17953-1625104/.minikube/files/etc/ssl/certs/16304352.pem -> 16304352.pem in /etc/ssl/certs
	I0115 11:02:45.586310 1657368 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-1625104/.minikube/files/etc/ssl/certs/16304352.pem -> /etc/ssl/certs/16304352.pem
	I0115 11:02:45.586429 1657368 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0115 11:02:45.596835 1657368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-1625104/.minikube/files/etc/ssl/certs/16304352.pem --> /etc/ssl/certs/16304352.pem (1708 bytes)
	I0115 11:02:45.626041 1657368 start.go:303] post-start completed in 164.683445ms
	I0115 11:02:45.626542 1657368 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-406064
	I0115 11:02:45.643782 1657368 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/config.json ...
	I0115 11:02:45.644091 1657368 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 11:02:45.644153 1657368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-406064
	I0115 11:02:45.661780 1657368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34734 SSHKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/ingress-addon-legacy-406064/id_rsa Username:docker}
	I0115 11:02:45.756610 1657368 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0115 11:02:45.762419 1657368 start.go:128] duration metric: createHost completed in 9.48290377s
	I0115 11:02:45.762445 1657368 start.go:83] releasing machines lock for "ingress-addon-legacy-406064", held for 9.483022413s
	I0115 11:02:45.762520 1657368 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-406064
	I0115 11:02:45.780381 1657368 ssh_runner.go:195] Run: cat /version.json
	I0115 11:02:45.780393 1657368 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0115 11:02:45.780441 1657368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-406064
	I0115 11:02:45.780457 1657368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-406064
	I0115 11:02:45.800774 1657368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34734 SSHKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/ingress-addon-legacy-406064/id_rsa Username:docker}
	I0115 11:02:45.810382 1657368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34734 SSHKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/ingress-addon-legacy-406064/id_rsa Username:docker}
	I0115 11:02:46.029590 1657368 ssh_runner.go:195] Run: systemctl --version
	I0115 11:02:46.035376 1657368 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0115 11:02:46.183569 1657368 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0115 11:02:46.188982 1657368 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0115 11:02:46.213825 1657368 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0115 11:02:46.213899 1657368 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0115 11:02:46.255757 1657368 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0115 11:02:46.255780 1657368 start.go:475] detecting cgroup driver to use...
	I0115 11:02:46.255815 1657368 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0115 11:02:46.255876 1657368 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0115 11:02:46.275376 1657368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0115 11:02:46.289080 1657368 docker.go:217] disabling cri-docker service (if available) ...
	I0115 11:02:46.289152 1657368 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0115 11:02:46.305405 1657368 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0115 11:02:46.322074 1657368 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0115 11:02:46.421608 1657368 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0115 11:02:46.527242 1657368 docker.go:233] disabling docker service ...
	I0115 11:02:46.527312 1657368 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0115 11:02:46.549587 1657368 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0115 11:02:46.563558 1657368 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0115 11:02:46.655532 1657368 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0115 11:02:46.760638 1657368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0115 11:02:46.774809 1657368 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0115 11:02:46.794007 1657368 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0115 11:02:46.794148 1657368 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 11:02:46.805746 1657368 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0115 11:02:46.805855 1657368 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 11:02:46.817459 1657368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 11:02:46.829395 1657368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 11:02:46.841478 1657368 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0115 11:02:46.852916 1657368 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0115 11:02:46.863069 1657368 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0115 11:02:46.872871 1657368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 11:02:46.977259 1657368 ssh_runner.go:195] Run: sudo systemctl restart crio
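	# Editor's note: after the sed edits above, /etc/crio/crio.conf.d/02-crio.conf
	# should carry (assumed layout; the file itself is not captured in this log):
	#   pause_image = "registry.k8s.io/pause:3.2"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	# which is why crio is restarted here before the socket wait below.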
	I0115 11:02:47.117398 1657368 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0115 11:02:47.117483 1657368 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0115 11:02:47.122147 1657368 start.go:543] Will wait 60s for crictl version
	I0115 11:02:47.122256 1657368 ssh_runner.go:195] Run: which crictl
	I0115 11:02:47.126648 1657368 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0115 11:02:47.165999 1657368 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0115 11:02:47.166110 1657368 ssh_runner.go:195] Run: crio --version
	I0115 11:02:47.210142 1657368 ssh_runner.go:195] Run: crio --version
	I0115 11:02:47.255443 1657368 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I0115 11:02:47.257542 1657368 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-406064 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0115 11:02:47.274468 1657368 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0115 11:02:47.278894 1657368 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
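	The brace-group above is an idempotent /etc/hosts update: it strips any stale host.minikube.internal entry, appends a fresh one, and writes through a temp file because a plain '>' redirect would not run with root privileges. The same commands, expanded for readability:
	    { grep -v $'\thost.minikube.internal$' /etc/hosts    # drop any old entry
	      echo "192.168.49.1	host.minikube.internal"        # append the fresh mapping
	    } > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts        # temp file, then privileged copy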
	I0115 11:02:47.292177 1657368 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0115 11:02:47.292241 1657368 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 11:02:47.350489 1657368 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0115 11:02:47.350563 1657368 ssh_runner.go:195] Run: which lz4
	I0115 11:02:47.354858 1657368 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-1625104/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0115 11:02:47.354944 1657368 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0115 11:02:47.359048 1657368 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0115 11:02:47.359083 1657368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-1625104/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (489766197 bytes)
	I0115 11:02:49.531464 1657368 crio.go:444] Took 2.176530 seconds to copy over tarball
	I0115 11:02:49.531540 1657368 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0115 11:02:52.217315 1657368 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.685745592s)
	I0115 11:02:52.217342 1657368 crio.go:451] Took 2.685855 seconds to extract the tarball
	I0115 11:02:52.217354 1657368 ssh_runner.go:146] rm: /preloaded.tar.lz4
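	The copy-and-extract above is equivalent to streaming the preload tarball through lz4 by hand (a sketch; the tarball name is the one logged above, and tar's -I lz4 is the same as this explicit pipe):
	    lz4 -dc preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 \
	      | sudo tar --xattrs --xattrs-include security.capability -x -C /var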
	I0115 11:02:52.308830 1657368 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 11:02:52.352592 1657368 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0115 11:02:52.352617 1657368 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0115 11:02:52.352653 1657368 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 11:02:52.352855 1657368 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0115 11:02:52.352918 1657368 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0115 11:02:52.352997 1657368 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0115 11:02:52.353077 1657368 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0115 11:02:52.353156 1657368 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0115 11:02:52.353223 1657368 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0115 11:02:52.353293 1657368 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0115 11:02:52.354469 1657368 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0115 11:02:52.354922 1657368 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0115 11:02:52.355085 1657368 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0115 11:02:52.355206 1657368 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0115 11:02:52.355320 1657368 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 11:02:52.355538 1657368 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0115 11:02:52.355738 1657368 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0115 11:02:52.355891 1657368 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0115 11:02:52.699304 1657368 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	W0115 11:02:52.706499 1657368 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0115 11:02:52.706793 1657368 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	W0115 11:02:52.725360 1657368 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0115 11:02:52.725583 1657368 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	W0115 11:02:52.728613 1657368 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0115 11:02:52.728833 1657368 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	W0115 11:02:52.744892 1657368 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0115 11:02:52.745088 1657368 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	W0115 11:02:52.745302 1657368 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0115 11:02:52.745436 1657368 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	W0115 11:02:52.769450 1657368 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0115 11:02:52.769639 1657368 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0115 11:02:52.794425 1657368 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0115 11:02:52.794558 1657368 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0115 11:02:52.794635 1657368 ssh_runner.go:195] Run: which crictl
	I0115 11:02:52.831239 1657368 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0115 11:02:52.831332 1657368 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0115 11:02:52.831414 1657368 ssh_runner.go:195] Run: which crictl
	W0115 11:02:52.855310 1657368 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0115 11:02:52.855533 1657368 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 11:02:52.921916 1657368 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0115 11:02:52.921996 1657368 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0115 11:02:52.922072 1657368 ssh_runner.go:195] Run: which crictl
	I0115 11:02:52.926235 1657368 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0115 11:02:52.926355 1657368 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0115 11:02:52.926435 1657368 ssh_runner.go:195] Run: which crictl
	I0115 11:02:52.945290 1657368 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0115 11:02:52.945371 1657368 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0115 11:02:52.945461 1657368 ssh_runner.go:195] Run: which crictl
	I0115 11:02:52.945572 1657368 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0115 11:02:52.945608 1657368 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0115 11:02:52.945650 1657368 ssh_runner.go:195] Run: which crictl
	I0115 11:02:52.955064 1657368 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0115 11:02:52.955272 1657368 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0115 11:02:52.955437 1657368 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0115 11:02:52.955489 1657368 ssh_runner.go:195] Run: which crictl
	I0115 11:02:52.955336 1657368 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0115 11:02:53.099390 1657368 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0115 11:02:53.099801 1657368 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 11:02:53.099837 1657368 ssh_runner.go:195] Run: which crictl
	I0115 11:02:53.099561 1657368 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0115 11:02:53.099593 1657368 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0115 11:02:53.099625 1657368 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0115 11:02:53.099657 1657368 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0115 11:02:53.099695 1657368 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-1625104/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I0115 11:02:53.099742 1657368 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-1625104/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I0115 11:02:53.099762 1657368 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0115 11:02:53.227796 1657368 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-1625104/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I0115 11:02:53.227852 1657368 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-1625104/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I0115 11:02:53.227872 1657368 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-1625104/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0115 11:02:53.227942 1657368 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 11:02:53.228018 1657368 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-1625104/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I0115 11:02:53.228081 1657368 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-1625104/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I0115 11:02:53.292531 1657368 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17953-1625104/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0115 11:02:53.292606 1657368 cache_images.go:92] LoadImages completed in 939.976749ms
	W0115 11:02:53.292689 1657368 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17953-1625104/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2: no such file or directory
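	The warning above is non-fatal: the per-image cache under .minikube/cache/images was never populated, so minikube falls back to pulling the images. One way to pre-populate that cache for later runs is minikube's cache subcommand (a hedged sketch; image names taken from the LoadImages list above):
	    out/minikube-linux-arm64 cache add registry.k8s.io/pause:3.2
	    out/minikube-linux-arm64 cache add registry.k8s.io/etcd:3.4.3-0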
	I0115 11:02:53.292756 1657368 ssh_runner.go:195] Run: crio config
	I0115 11:02:53.352297 1657368 cni.go:84] Creating CNI manager for ""
	I0115 11:02:53.352316 1657368 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0115 11:02:53.352347 1657368 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0115 11:02:53.352367 1657368 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-406064 NodeName:ingress-addon-legacy-406064 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0115 11:02:53.352507 1657368 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-406064"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
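	Before handing this generated config to kubeadm, it can be validated without touching the node, since kubeadm init supports --dry-run (a sketch; binary and config paths match the ones used below):
	    sudo /var/lib/minikube/binaries/v1.18.20/kubeadm init \
	      --config /var/tmp/minikube/kubeadm.yaml --dry-run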
	
	I0115 11:02:53.352578 1657368 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-406064 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-406064 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
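	Once the unit drop-in is copied into place just below and systemd reloaded, the effective kubelet command line can be inspected with systemd's own tooling (a sketch):
	    sudo systemctl daemon-reload
	    systemctl cat kubelet.service   # shows /etc/systemd/system/kubelet.service.d/10-kubeadm.conf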
	I0115 11:02:53.352640 1657368 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0115 11:02:53.363032 1657368 binaries.go:44] Found k8s binaries, skipping transfer
	I0115 11:02:53.363104 1657368 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0115 11:02:53.373389 1657368 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I0115 11:02:53.394053 1657368 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0115 11:02:53.414767 1657368 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0115 11:02:53.435896 1657368 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0115 11:02:53.440261 1657368 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 11:02:53.452867 1657368 certs.go:56] Setting up /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064 for IP: 192.168.49.2
	I0115 11:02:53.452902 1657368 certs.go:190] acquiring lock for shared ca certs: {Name:mk2a63925baba8534769a012921a3873667cd449 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 11:02:53.453025 1657368 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17953-1625104/.minikube/ca.key
	I0115 11:02:53.453077 1657368 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17953-1625104/.minikube/proxy-client-ca.key
	I0115 11:02:53.453128 1657368 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/client.key
	I0115 11:02:53.453142 1657368 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/client.crt with IP's: []
	I0115 11:02:53.837811 1657368 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/client.crt ...
	I0115 11:02:53.837844 1657368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/client.crt: {Name:mk75477c1403c0fcdeee4a95bace72f585253387 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 11:02:53.838040 1657368 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/client.key ...
	I0115 11:02:53.838053 1657368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/client.key: {Name:mk4ee0ecedd0ea7fe4db0444a3e4371c93d8b43c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 11:02:53.838139 1657368 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/apiserver.key.dd3b5fb2
	I0115 11:02:53.838159 1657368 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0115 11:02:54.406000 1657368 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/apiserver.crt.dd3b5fb2 ...
	I0115 11:02:54.406031 1657368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/apiserver.crt.dd3b5fb2: {Name:mk5424b01945e26f9dd1f415682c91009ee443af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 11:02:54.406206 1657368 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/apiserver.key.dd3b5fb2 ...
	I0115 11:02:54.406218 1657368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/apiserver.key.dd3b5fb2: {Name:mkfb718e5739dd1c72337f7a7ec247177b098adb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 11:02:54.406316 1657368 certs.go:337] copying /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/apiserver.crt
	I0115 11:02:54.406401 1657368 certs.go:341] copying /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/apiserver.key
	I0115 11:02:54.406460 1657368 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/proxy-client.key
	I0115 11:02:54.406475 1657368 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/proxy-client.crt with IP's: []
	I0115 11:02:54.910841 1657368 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/proxy-client.crt ...
	I0115 11:02:54.910871 1657368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/proxy-client.crt: {Name:mk29dbbf1cffdfd56dc88b99a02a050d4bb67121 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 11:02:54.911053 1657368 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/proxy-client.key ...
	I0115 11:02:54.911067 1657368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/proxy-client.key: {Name:mk4b3e29037816ca5c1f65e2ddd99ed413fe81ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 11:02:54.911144 1657368 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0115 11:02:54.911165 1657368 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0115 11:02:54.911179 1657368 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0115 11:02:54.911198 1657368 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0115 11:02:54.911222 1657368 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-1625104/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0115 11:02:54.911237 1657368 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-1625104/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0115 11:02:54.911250 1657368 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-1625104/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0115 11:02:54.911264 1657368 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-1625104/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
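	For orientation, a roughly equivalent standalone way to mint a serving cert with the same IP SANs is shown below. This is a self-signed sketch (OpenSSL 1.1.1+ for -addext), not the CA-signed flow minikube actually performs:
	    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
	      -keyout apiserver.key -out apiserver.crt -subj "/CN=minikube" \
	      -addext "subjectAltName=IP:192.168.49.2,IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1"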
	I0115 11:02:54.911318 1657368 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/home/jenkins/minikube-integration/17953-1625104/.minikube/certs/1630435.pem (1338 bytes)
	W0115 11:02:54.911358 1657368 certs.go:433] ignoring /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/home/jenkins/minikube-integration/17953-1625104/.minikube/certs/1630435_empty.pem, impossibly tiny 0 bytes
	I0115 11:02:54.911373 1657368 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/home/jenkins/minikube-integration/17953-1625104/.minikube/certs/ca-key.pem (1679 bytes)
	I0115 11:02:54.911398 1657368 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/home/jenkins/minikube-integration/17953-1625104/.minikube/certs/ca.pem (1082 bytes)
	I0115 11:02:54.911434 1657368 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/home/jenkins/minikube-integration/17953-1625104/.minikube/certs/cert.pem (1123 bytes)
	I0115 11:02:54.911464 1657368 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/home/jenkins/minikube-integration/17953-1625104/.minikube/certs/key.pem (1675 bytes)
	I0115 11:02:54.911513 1657368 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-1625104/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17953-1625104/.minikube/files/etc/ssl/certs/16304352.pem (1708 bytes)
	I0115 11:02:54.911552 1657368 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-1625104/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0115 11:02:54.911566 1657368 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/1630435.pem -> /usr/share/ca-certificates/1630435.pem
	I0115 11:02:54.911577 1657368 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-1625104/.minikube/files/etc/ssl/certs/16304352.pem -> /usr/share/ca-certificates/16304352.pem
	I0115 11:02:54.912150 1657368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0115 11:02:54.940223 1657368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0115 11:02:54.968842 1657368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0115 11:02:54.997467 1657368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0115 11:02:55.035184 1657368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-1625104/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0115 11:02:55.065805 1657368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-1625104/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0115 11:02:55.096028 1657368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-1625104/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0115 11:02:55.126389 1657368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-1625104/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0115 11:02:55.158492 1657368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-1625104/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0115 11:02:55.188969 1657368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/1630435.pem --> /usr/share/ca-certificates/1630435.pem (1338 bytes)
	I0115 11:02:55.218382 1657368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-1625104/.minikube/files/etc/ssl/certs/16304352.pem --> /usr/share/ca-certificates/16304352.pem (1708 bytes)
	I0115 11:02:55.246785 1657368 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0115 11:02:55.267681 1657368 ssh_runner.go:195] Run: openssl version
	I0115 11:02:55.274426 1657368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16304352.pem && ln -fs /usr/share/ca-certificates/16304352.pem /etc/ssl/certs/16304352.pem"
	I0115 11:02:55.285823 1657368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16304352.pem
	I0115 11:02:55.290207 1657368 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 15 10:58 /usr/share/ca-certificates/16304352.pem
	I0115 11:02:55.290332 1657368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16304352.pem
	I0115 11:02:55.298573 1657368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16304352.pem /etc/ssl/certs/3ec20f2e.0"
	I0115 11:02:55.309567 1657368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0115 11:02:55.321078 1657368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0115 11:02:55.325665 1657368 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 15 10:51 /usr/share/ca-certificates/minikubeCA.pem
	I0115 11:02:55.325760 1657368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0115 11:02:55.333987 1657368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0115 11:02:55.345145 1657368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1630435.pem && ln -fs /usr/share/ca-certificates/1630435.pem /etc/ssl/certs/1630435.pem"
	I0115 11:02:55.356448 1657368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1630435.pem
	I0115 11:02:55.361012 1657368 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 15 10:58 /usr/share/ca-certificates/1630435.pem
	I0115 11:02:55.361113 1657368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1630435.pem
	I0115 11:02:55.369569 1657368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1630435.pem /etc/ssl/certs/51391683.0"
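	The *.0 symlink names used above are not arbitrary: each is the OpenSSL subject hash of the certificate, which is how OpenSSL locates trust anchors in /etc/ssl/certs. For example, for the minikubeCA cert (hash b5213941 per the commands above):
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0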
	I0115 11:02:55.381064 1657368 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0115 11:02:55.385465 1657368 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0115 11:02:55.385534 1657368 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-406064 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-406064 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 11:02:55.385613 1657368 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0115 11:02:55.385676 1657368 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 11:02:55.426536 1657368 cri.go:89] found id: ""
	I0115 11:02:55.426664 1657368 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0115 11:02:55.437268 1657368 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0115 11:02:55.448079 1657368 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0115 11:02:55.448173 1657368 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0115 11:02:55.458856 1657368 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0115 11:02:55.459003 1657368 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0115 11:02:55.514202 1657368 kubeadm.go:322] W0115 11:02:55.513655    1232 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0115 11:02:55.569239 1657368 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I0115 11:02:55.670544 1657368 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0115 11:03:01.822538 1657368 kubeadm.go:322] W0115 11:03:01.822087    1232 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0115 11:03:01.824138 1657368 kubeadm.go:322] W0115 11:03:01.823770    1232 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0115 11:03:15.311460 1657368 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0115 11:03:15.311512 1657368 kubeadm.go:322] [preflight] Running pre-flight checks
	I0115 11:03:15.311593 1657368 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0115 11:03:15.311645 1657368 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-aws
	I0115 11:03:15.311677 1657368 kubeadm.go:322] OS: Linux
	I0115 11:03:15.311719 1657368 kubeadm.go:322] CGROUPS_CPU: enabled
	I0115 11:03:15.311764 1657368 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0115 11:03:15.311808 1657368 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0115 11:03:15.311852 1657368 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0115 11:03:15.311901 1657368 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0115 11:03:15.311950 1657368 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0115 11:03:15.312017 1657368 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0115 11:03:15.312114 1657368 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0115 11:03:15.312201 1657368 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0115 11:03:15.312296 1657368 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0115 11:03:15.312372 1657368 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0115 11:03:15.312408 1657368 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0115 11:03:15.312468 1657368 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0115 11:03:15.315679 1657368 out.go:204]   - Generating certificates and keys ...
	I0115 11:03:15.315775 1657368 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0115 11:03:15.315837 1657368 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0115 11:03:15.315899 1657368 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0115 11:03:15.315951 1657368 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0115 11:03:15.316007 1657368 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0115 11:03:15.316053 1657368 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0115 11:03:15.316102 1657368 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0115 11:03:15.316223 1657368 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-406064 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0115 11:03:15.316277 1657368 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0115 11:03:15.316395 1657368 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-406064 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0115 11:03:15.316455 1657368 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0115 11:03:15.316513 1657368 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0115 11:03:15.316554 1657368 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0115 11:03:15.316605 1657368 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0115 11:03:15.316654 1657368 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0115 11:03:15.316702 1657368 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0115 11:03:15.316760 1657368 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0115 11:03:15.316810 1657368 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0115 11:03:15.316870 1657368 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0115 11:03:15.321020 1657368 out.go:204]   - Booting up control plane ...
	I0115 11:03:15.321191 1657368 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0115 11:03:15.321279 1657368 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0115 11:03:15.321366 1657368 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0115 11:03:15.321467 1657368 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0115 11:03:15.321632 1657368 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0115 11:03:15.321720 1657368 kubeadm.go:322] [apiclient] All control plane components are healthy after 12.002353 seconds
	I0115 11:03:15.321837 1657368 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0115 11:03:15.321967 1657368 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0115 11:03:15.322035 1657368 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0115 11:03:15.322189 1657368 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-406064 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0115 11:03:15.322257 1657368 kubeadm.go:322] [bootstrap-token] Using token: 4nlu0y.dtcvn87vi30zwzrf
	I0115 11:03:15.324467 1657368 out.go:204]   - Configuring RBAC rules ...
	I0115 11:03:15.324652 1657368 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0115 11:03:15.324739 1657368 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0115 11:03:15.324878 1657368 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0115 11:03:15.325011 1657368 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0115 11:03:15.325125 1657368 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0115 11:03:15.325210 1657368 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0115 11:03:15.325324 1657368 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0115 11:03:15.325381 1657368 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0115 11:03:15.325427 1657368 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0115 11:03:15.325433 1657368 kubeadm.go:322] 
	I0115 11:03:15.325493 1657368 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0115 11:03:15.325498 1657368 kubeadm.go:322] 
	I0115 11:03:15.325575 1657368 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0115 11:03:15.325579 1657368 kubeadm.go:322] 
	I0115 11:03:15.325604 1657368 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0115 11:03:15.325663 1657368 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0115 11:03:15.325713 1657368 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0115 11:03:15.325718 1657368 kubeadm.go:322] 
	I0115 11:03:15.325770 1657368 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0115 11:03:15.325845 1657368 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0115 11:03:15.325913 1657368 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0115 11:03:15.325917 1657368 kubeadm.go:322] 
	I0115 11:03:15.326000 1657368 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0115 11:03:15.326076 1657368 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0115 11:03:15.326081 1657368 kubeadm.go:322] 
	I0115 11:03:15.326165 1657368 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 4nlu0y.dtcvn87vi30zwzrf \
	I0115 11:03:15.326271 1657368 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:9fc86a3add6326d4608da878bd8e422e94962742c71a62ee80a4f994be1f8a81 \
	I0115 11:03:15.326320 1657368 kubeadm.go:322]     --control-plane 
	I0115 11:03:15.326327 1657368 kubeadm.go:322] 
	I0115 11:03:15.326412 1657368 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0115 11:03:15.326416 1657368 kubeadm.go:322] 
	I0115 11:03:15.326497 1657368 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 4nlu0y.dtcvn87vi30zwzrf \
	I0115 11:03:15.326623 1657368 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:9fc86a3add6326d4608da878bd8e422e94962742c71a62ee80a4f994be1f8a81 
	I0115 11:03:15.326632 1657368 cni.go:84] Creating CNI manager for ""
	I0115 11:03:15.326639 1657368 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0115 11:03:15.328706 1657368 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0115 11:03:15.330632 1657368 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0115 11:03:15.336674 1657368 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I0115 11:03:15.336695 1657368 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0115 11:03:15.359819 1657368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
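	Whether the kindnet CNI pods came up can be checked directly once the apply above succeeds (a sketch; the app=kindnet label is assumed from kindnet's usual DaemonSet manifest, not confirmed by this log):
	    kubectl --context ingress-addon-legacy-406064 -n kube-system \
	      get pods -l app=kindnet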
	I0115 11:03:15.806541 1657368 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0115 11:03:15.806688 1657368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:03:15.806764 1657368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=49acfca761ba3cce5d2bedb7b4a0191c7f924d23 minikube.k8s.io/name=ingress-addon-legacy-406064 minikube.k8s.io/updated_at=2024_01_15T11_03_15_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:03:15.980582 1657368 ops.go:34] apiserver oom_adj: -16
	I0115 11:03:15.980697 1657368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:03:16.481535 1657368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:03:16.981356 1657368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:03:17.480994 1657368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:03:17.981315 1657368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:03:18.481494 1657368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:03:18.981427 1657368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:03:19.481507 1657368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:03:19.981557 1657368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:03:20.480852 1657368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:03:20.981521 1657368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:03:21.481069 1657368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:03:21.981654 1657368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:03:22.481687 1657368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:03:22.981463 1657368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:03:23.481823 1657368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:03:23.981719 1657368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:03:24.481462 1657368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:03:24.980832 1657368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:03:25.481298 1657368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:03:25.981200 1657368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:03:26.480796 1657368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:03:26.981416 1657368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:03:27.480916 1657368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:03:27.981372 1657368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:03:28.481630 1657368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:03:28.981483 1657368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:03:29.480826 1657368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:03:29.980853 1657368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:03:30.481289 1657368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:03:30.612329 1657368 kubeadm.go:1088] duration metric: took 14.805693684s to wait for elevateKubeSystemPrivileges.
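	The half-second polling above amounts to waiting for the default ServiceAccount, which kubeadm creates asynchronously after init. A hand-rolled equivalent (a sketch using the same kubectl binary and kubeconfig as the log):
	    until sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default \
	          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done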
	I0115 11:03:30.612369 1657368 kubeadm.go:406] StartCluster complete in 35.226857568s
	I0115 11:03:30.612386 1657368 settings.go:142] acquiring lock: {Name:mk05555b5306114ae6571475ccb387a5354ea318 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 11:03:30.612446 1657368 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17953-1625104/kubeconfig
	I0115 11:03:30.613152 1657368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-1625104/kubeconfig: {Name:mk8fd98ab18475cc98d08290957f6662a0acdd86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 11:03:30.613857 1657368 kapi.go:59] client config for ingress-addon-legacy-406064: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/client.crt", KeyFile:"/home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/client.key", CAFile:"/home/jenkins/minikube-integration/17953-1625104/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9dd0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0115 11:03:30.614468 1657368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0115 11:03:30.614783 1657368 config.go:182] Loaded profile config "ingress-addon-legacy-406064": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0115 11:03:30.615238 1657368 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0115 11:03:30.615302 1657368 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-406064"
	I0115 11:03:30.615317 1657368 addons.go:234] Setting addon storage-provisioner=true in "ingress-addon-legacy-406064"
	I0115 11:03:30.615372 1657368 host.go:66] Checking if "ingress-addon-legacy-406064" exists ...
	I0115 11:03:30.615821 1657368 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-406064 --format={{.State.Status}}
	I0115 11:03:30.616465 1657368 cert_rotation.go:137] Starting client certificate rotation controller
	I0115 11:03:30.616831 1657368 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-406064"
	I0115 11:03:30.616845 1657368 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-406064"
	I0115 11:03:30.617237 1657368 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-406064 --format={{.State.Status}}
	I0115 11:03:30.689662 1657368 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 11:03:30.687582 1657368 kapi.go:59] client config for ingress-addon-legacy-406064: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/client.crt", KeyFile:"/home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/client.key", CAFile:"/home/jenkins/minikube-integration/17953-1625104/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9dd0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0115 11:03:30.692195 1657368 addons.go:234] Setting addon default-storageclass=true in "ingress-addon-legacy-406064"
	I0115 11:03:30.692239 1657368 host.go:66] Checking if "ingress-addon-legacy-406064" exists ...
	I0115 11:03:30.692677 1657368 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-406064 --format={{.State.Status}}
	I0115 11:03:30.692923 1657368 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 11:03:30.692943 1657368 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0115 11:03:30.692986 1657368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-406064
	I0115 11:03:30.745686 1657368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34734 SSHKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/ingress-addon-legacy-406064/id_rsa Username:docker}
	I0115 11:03:30.752112 1657368 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0115 11:03:30.752148 1657368 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0115 11:03:30.752209 1657368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-406064
	I0115 11:03:30.786293 1657368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34734 SSHKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/ingress-addon-legacy-406064/id_rsa Username:docker}
	I0115 11:03:30.901661 1657368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0115 11:03:30.962896 1657368 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 11:03:30.982619 1657368 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0115 11:03:31.299344 1657368 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-406064" context rescaled to 1 replicas
	I0115 11:03:31.299388 1657368 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0115 11:03:31.301755 1657368 out.go:177] * Verifying Kubernetes components...
	I0115 11:03:31.304694 1657368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 11:03:31.432518 1657368 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
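	
	Note: the sed pipeline at 11:03:30.901661 rewrites the coredns ConfigMap in place: it inserts a hosts block before the "forward . /etc/resolv.conf" directive and a "log" directive before "errors". Reconstructed from those two -e expressions (derived from the command itself, not captured from the cluster), the affected Corefile fragment should read:
	
	        log
	        errors
	        ...
	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
	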
	I0115 11:03:31.540934 1657368 kapi.go:59] client config for ingress-addon-legacy-406064: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/client.crt", KeyFile:"/home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/client.key", CAFile:"/home/jenkins/minikube-integration/17953-1625104/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9dd0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0115 11:03:31.541290 1657368 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-406064" to be "Ready" ...
	I0115 11:03:31.574612 1657368 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0115 11:03:31.576227 1657368 addons.go:505] enable addons completed in 960.981965ms: enabled=[storage-provisioner default-storageclass]
	I0115 11:03:33.544091 1657368 node_ready.go:58] node "ingress-addon-legacy-406064" has status "Ready":"False"
	I0115 11:03:35.544248 1657368 node_ready.go:58] node "ingress-addon-legacy-406064" has status "Ready":"False"
	I0115 11:03:38.044649 1657368 node_ready.go:58] node "ingress-addon-legacy-406064" has status "Ready":"False"
	I0115 11:03:39.044342 1657368 node_ready.go:49] node "ingress-addon-legacy-406064" has status "Ready":"True"
	I0115 11:03:39.044370 1657368 node_ready.go:38] duration metric: took 7.50305564s waiting for node "ingress-addon-legacy-406064" to be "Ready" ...
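	
	Note: the node_ready.go wait above simply polls the node object until its Ready condition reports True (7.5s here, at roughly 2s intervals). A minimal client-go sketch of such a loop, using the kubeconfig path and node name from this run; this is an illustration of the pattern, not minikube's actual implementation:
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17953-1625104/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute) // the 6m0s budget in the log
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ingress-addon-legacy-406064", metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						fmt.Println("node is Ready")
						return
					}
				}
			}
			time.Sleep(2 * time.Second) // matches the poll cadence seen above
		}
		fmt.Println("timed out waiting for node Ready")
	}
	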
	I0115 11:03:39.044383 1657368 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 11:03:39.052108 1657368 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-l2tw7" in "kube-system" namespace to be "Ready" ...
	I0115 11:03:41.055359 1657368 pod_ready.go:102] pod "coredns-66bff467f8-l2tw7" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-01-15 11:03:30 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0115 11:03:43.055582 1657368 pod_ready.go:102] pod "coredns-66bff467f8-l2tw7" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-01-15 11:03:30 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0115 11:03:45.058420 1657368 pod_ready.go:102] pod "coredns-66bff467f8-l2tw7" in "kube-system" namespace has status "Ready":"False"
	I0115 11:03:47.558195 1657368 pod_ready.go:102] pod "coredns-66bff467f8-l2tw7" in "kube-system" namespace has status "Ready":"False"
	I0115 11:03:50.057534 1657368 pod_ready.go:92] pod "coredns-66bff467f8-l2tw7" in "kube-system" namespace has status "Ready":"True"
	I0115 11:03:50.057565 1657368 pod_ready.go:81] duration metric: took 11.005421425s waiting for pod "coredns-66bff467f8-l2tw7" in "kube-system" namespace to be "Ready" ...
	I0115 11:03:50.057584 1657368 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-406064" in "kube-system" namespace to be "Ready" ...
	I0115 11:03:50.062383 1657368 pod_ready.go:92] pod "etcd-ingress-addon-legacy-406064" in "kube-system" namespace has status "Ready":"True"
	I0115 11:03:50.062409 1657368 pod_ready.go:81] duration metric: took 4.817898ms waiting for pod "etcd-ingress-addon-legacy-406064" in "kube-system" namespace to be "Ready" ...
	I0115 11:03:50.062430 1657368 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-406064" in "kube-system" namespace to be "Ready" ...
	I0115 11:03:50.067169 1657368 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-406064" in "kube-system" namespace has status "Ready":"True"
	I0115 11:03:50.067196 1657368 pod_ready.go:81] duration metric: took 4.757485ms waiting for pod "kube-apiserver-ingress-addon-legacy-406064" in "kube-system" namespace to be "Ready" ...
	I0115 11:03:50.067216 1657368 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-406064" in "kube-system" namespace to be "Ready" ...
	I0115 11:03:50.072230 1657368 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-406064" in "kube-system" namespace has status "Ready":"True"
	I0115 11:03:50.072256 1657368 pod_ready.go:81] duration metric: took 5.031628ms waiting for pod "kube-controller-manager-ingress-addon-legacy-406064" in "kube-system" namespace to be "Ready" ...
	I0115 11:03:50.072268 1657368 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7qcgz" in "kube-system" namespace to be "Ready" ...
	I0115 11:03:50.076957 1657368 pod_ready.go:92] pod "kube-proxy-7qcgz" in "kube-system" namespace has status "Ready":"True"
	I0115 11:03:50.076987 1657368 pod_ready.go:81] duration metric: took 4.710659ms waiting for pod "kube-proxy-7qcgz" in "kube-system" namespace to be "Ready" ...
	I0115 11:03:50.076998 1657368 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-406064" in "kube-system" namespace to be "Ready" ...
	I0115 11:03:50.252376 1657368 request.go:629] Waited for 175.223804ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-406064
	I0115 11:03:50.452307 1657368 request.go:629] Waited for 197.273245ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-406064
	I0115 11:03:50.455045 1657368 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-406064" in "kube-system" namespace has status "Ready":"True"
	I0115 11:03:50.455068 1657368 pod_ready.go:81] duration metric: took 378.061848ms waiting for pod "kube-scheduler-ingress-addon-legacy-406064" in "kube-system" namespace to be "Ready" ...
	I0115 11:03:50.455080 1657368 pod_ready.go:38] duration metric: took 11.410674853s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
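	
	Note: the request.go:629 "Waited ... due to client-side throttling" lines here and below come from client-go's local rate limiter, not from server-side priority and fairness. With QPS and Burst left at 0 (as in the rest.Config dumps above), the client-go defaults of 5 QPS / burst 10 apply. A hedged sketch of how a caller could raise them; the values are illustrative, not what minikube uses:
	
	package main
	
	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// newFasterClient builds a clientset whose client-side rate limiter
	// allows more requests per second, shortening waits like those logged.
	func newFasterClient(kubeconfig string) (*kubernetes.Clientset, error) {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return nil, err
		}
		cfg.QPS = 50    // 0 falls back to the client-go default of 5
		cfg.Burst = 100 // 0 falls back to the default burst of 10
		return kubernetes.NewForConfig(cfg)
	}
	
	func main() {
		if _, err := newFasterClient("/home/jenkins/minikube-integration/17953-1625104/kubeconfig"); err != nil {
			panic(err)
		}
	}
	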
	I0115 11:03:50.455129 1657368 api_server.go:52] waiting for apiserver process to appear ...
	I0115 11:03:50.455223 1657368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 11:03:50.468132 1657368 api_server.go:72] duration metric: took 19.168665784s to wait for apiserver process to appear ...
	I0115 11:03:50.468156 1657368 api_server.go:88] waiting for apiserver healthz status ...
	I0115 11:03:50.468180 1657368 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0115 11:03:50.477049 1657368 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0115 11:03:50.477971 1657368 api_server.go:141] control plane version: v1.18.20
	I0115 11:03:50.477996 1657368 api_server.go:131] duration metric: took 9.831606ms to wait for apiserver health ...
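	
	Note: the healthz wait above is a plain HTTPS GET against the apiserver; a 200 response with body "ok" is treated as healthy. A rough Go equivalent follows. TLS verification is skipped only to keep the sketch short (minikube's client trusts /home/jenkins/minikube-integration/17953-1625104/.minikube/ca.crt instead), and anonymous access to /healthz may be rejected on clusters with stricter RBAC:
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// InsecureSkipVerify is for brevity here; real code should load the CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d: %s\n", resp.StatusCode, body) // the log saw 200 with "ok"
	}
	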
	I0115 11:03:50.478005 1657368 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 11:03:50.653276 1657368 request.go:629] Waited for 175.208961ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0115 11:03:50.658959 1657368 system_pods.go:59] 8 kube-system pods found
	I0115 11:03:50.658998 1657368 system_pods.go:61] "coredns-66bff467f8-l2tw7" [770e71a2-84c0-41f8-a588-914f6551d5e9] Running
	I0115 11:03:50.659005 1657368 system_pods.go:61] "etcd-ingress-addon-legacy-406064" [b389199c-5939-4713-af96-ffb368cc777e] Running
	I0115 11:03:50.659010 1657368 system_pods.go:61] "kindnet-7vw4p" [b1a00e4e-a178-448c-9b00-bf690fba471e] Running
	I0115 11:03:50.659015 1657368 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-406064" [8ae22e3b-bdda-4256-8ec6-bfcc3cd2d460] Running
	I0115 11:03:50.659021 1657368 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-406064" [d61076c2-72fc-4d72-852a-a8561416f3b9] Running
	I0115 11:03:50.659025 1657368 system_pods.go:61] "kube-proxy-7qcgz" [76b1045b-6a91-4f24-ac93-fa80643a8459] Running
	I0115 11:03:50.659031 1657368 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-406064" [c4dd3d55-dfe4-4e3d-abe0-fa869e62a8a0] Running
	I0115 11:03:50.659035 1657368 system_pods.go:61] "storage-provisioner" [d029d94e-46f7-4dd1-ae9c-ae9cf3613c5b] Running
	I0115 11:03:50.659047 1657368 system_pods.go:74] duration metric: took 181.035055ms to wait for pod list to return data ...
	I0115 11:03:50.659058 1657368 default_sa.go:34] waiting for default service account to be created ...
	I0115 11:03:50.852371 1657368 request.go:629] Waited for 193.242979ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0115 11:03:50.854777 1657368 default_sa.go:45] found service account: "default"
	I0115 11:03:50.854805 1657368 default_sa.go:55] duration metric: took 195.739843ms for default service account to be created ...
	I0115 11:03:50.854815 1657368 system_pods.go:116] waiting for k8s-apps to be running ...
	I0115 11:03:51.053397 1657368 request.go:629] Waited for 198.487598ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0115 11:03:51.062909 1657368 system_pods.go:86] 8 kube-system pods found
	I0115 11:03:51.062999 1657368 system_pods.go:89] "coredns-66bff467f8-l2tw7" [770e71a2-84c0-41f8-a588-914f6551d5e9] Running
	I0115 11:03:51.063024 1657368 system_pods.go:89] "etcd-ingress-addon-legacy-406064" [b389199c-5939-4713-af96-ffb368cc777e] Running
	I0115 11:03:51.063072 1657368 system_pods.go:89] "kindnet-7vw4p" [b1a00e4e-a178-448c-9b00-bf690fba471e] Running
	I0115 11:03:51.063099 1657368 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-406064" [8ae22e3b-bdda-4256-8ec6-bfcc3cd2d460] Running
	I0115 11:03:51.063118 1657368 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-406064" [d61076c2-72fc-4d72-852a-a8561416f3b9] Running
	I0115 11:03:51.063156 1657368 system_pods.go:89] "kube-proxy-7qcgz" [76b1045b-6a91-4f24-ac93-fa80643a8459] Running
	I0115 11:03:51.063186 1657368 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-406064" [c4dd3d55-dfe4-4e3d-abe0-fa869e62a8a0] Running
	I0115 11:03:51.063205 1657368 system_pods.go:89] "storage-provisioner" [d029d94e-46f7-4dd1-ae9c-ae9cf3613c5b] Running
	I0115 11:03:51.063243 1657368 system_pods.go:126] duration metric: took 208.420148ms to wait for k8s-apps to be running ...
	I0115 11:03:51.063271 1657368 system_svc.go:44] waiting for kubelet service to be running ....
	I0115 11:03:51.063364 1657368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 11:03:51.078711 1657368 system_svc.go:56] duration metric: took 15.431227ms WaitForService to wait for kubelet.
	I0115 11:03:51.078794 1657368 kubeadm.go:581] duration metric: took 19.779332249s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0115 11:03:51.078871 1657368 node_conditions.go:102] verifying NodePressure condition ...
	I0115 11:03:51.253246 1657368 request.go:629] Waited for 174.304722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0115 11:03:51.256022 1657368 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0115 11:03:51.256056 1657368 node_conditions.go:123] node cpu capacity is 2
	I0115 11:03:51.256069 1657368 node_conditions.go:105] duration metric: took 177.191697ms to run NodePressure ...
	I0115 11:03:51.256081 1657368 start.go:228] waiting for startup goroutines ...
	I0115 11:03:51.256087 1657368 start.go:233] waiting for cluster config update ...
	I0115 11:03:51.256101 1657368 start.go:242] writing updated cluster config ...
	I0115 11:03:51.256378 1657368 ssh_runner.go:195] Run: rm -f paused
	I0115 11:03:51.317727 1657368 start.go:600] kubectl: 1.29.0, cluster: 1.18.20 (minor skew: 11)
	I0115 11:03:51.320104 1657368 out.go:177] 
	W0115 11:03:51.321896 1657368 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.18.20.
	I0115 11:03:51.323758 1657368 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0115 11:03:51.325792 1657368 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-406064" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 15 11:06:56 ingress-addon-legacy-406064 conmon[3652]: conmon c239f1a605672bf6b1fc <ninfo>: container 3663 exited with status 1
	Jan 15 11:06:56 ingress-addon-legacy-406064 crio[899]: time="2024-01-15 11:06:56.791496958Z" level=info msg="Started container" PID=3663 containerID=c239f1a605672bf6b1fc8461051aa2c79624e8390a9995e4afc4b68581408c0f description=default/hello-world-app-5f5d8b66bb-4n55n/hello-world-app id=e3a3f020-dfad-4f8e-8f6c-90db778f4df7 name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=63d8a0eee1131fe434019af4eb3452c3d8fa0f6b859622efb1cda406cc353c85
	Jan 15 11:06:57 ingress-addon-legacy-406064 crio[899]: time="2024-01-15 11:06:57.313944039Z" level=info msg="Removing container: 4db75bd226092a3e3d975ec7158f3328b148eebb57544b8fae2f802a74596daa" id=63a05ee8-4282-4926-9fe6-cef949891a43 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Jan 15 11:06:57 ingress-addon-legacy-406064 crio[899]: time="2024-01-15 11:06:57.335228122Z" level=info msg="Removed container 4db75bd226092a3e3d975ec7158f3328b148eebb57544b8fae2f802a74596daa: default/hello-world-app-5f5d8b66bb-4n55n/hello-world-app" id=63a05ee8-4282-4926-9fe6-cef949891a43 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Jan 15 11:06:57 ingress-addon-legacy-406064 crio[899]: time="2024-01-15 11:06:57.352089319Z" level=info msg="Stopping pod sandbox: 13f7be1979fdfbefe1afd637023e2d7a5d53c3deebf4277bc2048bdaacdc7f42" id=21920c7f-bcc4-45ed-a100-ace369f1346e name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 15 11:06:57 ingress-addon-legacy-406064 crio[899]: time="2024-01-15 11:06:57.352137039Z" level=info msg="Stopped pod sandbox (already stopped): 13f7be1979fdfbefe1afd637023e2d7a5d53c3deebf4277bc2048bdaacdc7f42" id=21920c7f-bcc4-45ed-a100-ace369f1346e name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 15 11:06:58 ingress-addon-legacy-406064 crio[899]: time="2024-01-15 11:06:58.254204606Z" level=info msg="Stopping container: c97de19f57d32bd21c11f3791a2cadcee3b45fed989cf8c5a2b49462da66b110 (timeout: 2s)" id=d138bafc-080c-468d-8cfe-2520505d6590 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jan 15 11:06:58 ingress-addon-legacy-406064 crio[899]: time="2024-01-15 11:06:58.261842928Z" level=info msg="Stopping container: c97de19f57d32bd21c11f3791a2cadcee3b45fed989cf8c5a2b49462da66b110 (timeout: 2s)" id=ddf559e4-9aa5-46fd-8242-fa29c853f186 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jan 15 11:06:58 ingress-addon-legacy-406064 crio[899]: time="2024-01-15 11:06:58.699487244Z" level=info msg="Stopping pod sandbox: 13f7be1979fdfbefe1afd637023e2d7a5d53c3deebf4277bc2048bdaacdc7f42" id=f4613ea9-6d1f-4cb4-89a4-5ef4bf5750d5 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 15 11:06:58 ingress-addon-legacy-406064 crio[899]: time="2024-01-15 11:06:58.699532231Z" level=info msg="Stopped pod sandbox (already stopped): 13f7be1979fdfbefe1afd637023e2d7a5d53c3deebf4277bc2048bdaacdc7f42" id=f4613ea9-6d1f-4cb4-89a4-5ef4bf5750d5 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 15 11:07:00 ingress-addon-legacy-406064 crio[899]: time="2024-01-15 11:07:00.269166861Z" level=warning msg="Stopping container c97de19f57d32bd21c11f3791a2cadcee3b45fed989cf8c5a2b49462da66b110 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=d138bafc-080c-468d-8cfe-2520505d6590 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jan 15 11:07:00 ingress-addon-legacy-406064 conmon[2749]: conmon c97de19f57d32bd21c11 <ninfo>: container 2761 exited with status 137
	Jan 15 11:07:00 ingress-addon-legacy-406064 crio[899]: time="2024-01-15 11:07:00.433830090Z" level=info msg="Stopped container c97de19f57d32bd21c11f3791a2cadcee3b45fed989cf8c5a2b49462da66b110: ingress-nginx/ingress-nginx-controller-7fcf777cb7-2jpx2/controller" id=d138bafc-080c-468d-8cfe-2520505d6590 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jan 15 11:07:00 ingress-addon-legacy-406064 crio[899]: time="2024-01-15 11:07:00.434149721Z" level=info msg="Stopped container c97de19f57d32bd21c11f3791a2cadcee3b45fed989cf8c5a2b49462da66b110: ingress-nginx/ingress-nginx-controller-7fcf777cb7-2jpx2/controller" id=ddf559e4-9aa5-46fd-8242-fa29c853f186 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jan 15 11:07:00 ingress-addon-legacy-406064 crio[899]: time="2024-01-15 11:07:00.434539307Z" level=info msg="Stopping pod sandbox: 720da7cbda550a2fff892b4b44bfcdec28a3cdc29f7aa6454da7385581c78eea" id=f71a3caf-c2f1-4e3d-966d-d22bd31fa39d name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 15 11:07:00 ingress-addon-legacy-406064 crio[899]: time="2024-01-15 11:07:00.434846491Z" level=info msg="Stopping pod sandbox: 720da7cbda550a2fff892b4b44bfcdec28a3cdc29f7aa6454da7385581c78eea" id=3028d14d-009e-4c96-94b6-bfac027c0dea name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 15 11:07:00 ingress-addon-legacy-406064 crio[899]: time="2024-01-15 11:07:00.438090118Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-3LZ55ILYTMQALRNN - [0:0]\n:KUBE-HP-X6ZCOP3FCNMRDIXP - [0:0]\n-X KUBE-HP-3LZ55ILYTMQALRNN\n-X KUBE-HP-X6ZCOP3FCNMRDIXP\nCOMMIT\n"
	Jan 15 11:07:00 ingress-addon-legacy-406064 crio[899]: time="2024-01-15 11:07:00.439666932Z" level=info msg="Closing host port tcp:80"
	Jan 15 11:07:00 ingress-addon-legacy-406064 crio[899]: time="2024-01-15 11:07:00.439715169Z" level=info msg="Closing host port tcp:443"
	Jan 15 11:07:00 ingress-addon-legacy-406064 crio[899]: time="2024-01-15 11:07:00.440917428Z" level=info msg="Host port tcp:80 does not have an open socket"
	Jan 15 11:07:00 ingress-addon-legacy-406064 crio[899]: time="2024-01-15 11:07:00.440943282Z" level=info msg="Host port tcp:443 does not have an open socket"
	Jan 15 11:07:00 ingress-addon-legacy-406064 crio[899]: time="2024-01-15 11:07:00.441090363Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-2jpx2 Namespace:ingress-nginx ID:720da7cbda550a2fff892b4b44bfcdec28a3cdc29f7aa6454da7385581c78eea UID:b17d1032-4389-4a42-8a98-d1e221688040 NetNS:/var/run/netns/9a32aa2b-5e2f-40d9-8365-dad3126e6fff Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 15 11:07:00 ingress-addon-legacy-406064 crio[899]: time="2024-01-15 11:07:00.441228632Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-2jpx2 from CNI network \"kindnet\" (type=ptp)"
	Jan 15 11:07:00 ingress-addon-legacy-406064 crio[899]: time="2024-01-15 11:07:00.467938278Z" level=info msg="Stopped pod sandbox: 720da7cbda550a2fff892b4b44bfcdec28a3cdc29f7aa6454da7385581c78eea" id=f71a3caf-c2f1-4e3d-966d-d22bd31fa39d name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 15 11:07:00 ingress-addon-legacy-406064 crio[899]: time="2024-01-15 11:07:00.468055436Z" level=info msg="Stopped pod sandbox (already stopped): 720da7cbda550a2fff892b4b44bfcdec28a3cdc29f7aa6454da7385581c78eea" id=3028d14d-009e-4c96-94b6-bfac027c0dea name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
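	
	Note: decoded, the escaped ruleset in the "Restoring iptables rules" message above is the following iptables-restore input, which declares and then deletes (-X) the two per-pod hostport chains now that ports 80/443 are released:
	
	*nat
	:KUBE-HOSTPORTS - [0:0]
	:KUBE-HP-3LZ55ILYTMQALRNN - [0:0]
	:KUBE-HP-X6ZCOP3FCNMRDIXP - [0:0]
	-X KUBE-HP-3LZ55ILYTMQALRNN
	-X KUBE-HP-X6ZCOP3FCNMRDIXP
	COMMIT
	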
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c239f1a605672       dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79                                                   9 seconds ago       Exited              hello-world-app           2                   63d8a0eee1131       hello-world-app-5f5d8b66bb-4n55n
	35f1b801cc065       docker.io/library/nginx@sha256:7913e8fa2e6a5f0160a5e6b7ea48b7d4a301c6058d63c3d632a35a59093cb4eb                    2 minutes ago       Running             nginx                     0                   cac550e53028e       nginx
	c97de19f57d32       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   3 minutes ago       Exited              controller                0                   720da7cbda550       ingress-nginx-controller-7fcf777cb7-2jpx2
	1ecfec785d709       a883f7fc35610a84d589cbb450eade9face1d1a8b2cbdafa1690cbffe68cfe88                                                   3 minutes ago       Exited              patch                     1                   dada135940eca       ingress-nginx-admission-patch-zrhk7
	50ced05a469f0       docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7     3 minutes ago       Exited              create                    0                   511466f0fd577       ingress-nginx-admission-create-wlfmf
	1c405f2ed905f       6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184                                                   3 minutes ago       Running             coredns                   0                   46b38f0262b58       coredns-66bff467f8-l2tw7
	17f8d68600a64       gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2    3 minutes ago       Running             storage-provisioner       0                   0b7b2557d2de2       storage-provisioner
	ca98fc83f9170       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                 3 minutes ago       Running             kindnet-cni               0                   fc421849c8909       kindnet-7vw4p
	fbc51dc54ecb9       565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8                                                   3 minutes ago       Running             kube-proxy                0                   a9fa8da37c076       kube-proxy-7qcgz
	4d78c1df6452c       095f37015706de6eedb4f57eb2f9a25a1e3bf4bec63d50ba73f8968ef4094fd1                                                   4 minutes ago       Running             kube-scheduler            0                   02eac5fb96ec0       kube-scheduler-ingress-addon-legacy-406064
	a702046bed4f4       2694cf044d66591c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0                                                   4 minutes ago       Running             kube-apiserver            0                   84bc9b2fa4eb5       kube-apiserver-ingress-addon-legacy-406064
	35be5f83f36c2       68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1                                                   4 minutes ago       Running             kube-controller-manager   0                   41143d67b16fd       kube-controller-manager-ingress-addon-legacy-406064
	88983f285aba8       ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952                                                   4 minutes ago       Running             etcd                      0                   c0d70073d1979       etcd-ingress-addon-legacy-406064
	
	
	==> coredns [1c405f2ed905fc1da6db30a85fc22dfed3b76074a819747d9cc9fbacfe050a08] <==
	[INFO] 10.244.0.5:47171 - 20302 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000030055s
	[INFO] 10.244.0.5:47171 - 50250 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001750552s
	[INFO] 10.244.0.5:43290 - 11102 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002055251s
	[INFO] 10.244.0.5:47171 - 40387 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001767069s
	[INFO] 10.244.0.5:43290 - 27328 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001911813s
	[INFO] 10.244.0.5:43290 - 33601 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000116879s
	[INFO] 10.244.0.5:47171 - 8837 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000028389s
	[INFO] 10.244.0.5:55826 - 29798 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000076938s
	[INFO] 10.244.0.5:36150 - 773 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000040254s
	[INFO] 10.244.0.5:55826 - 46669 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000033861s
	[INFO] 10.244.0.5:55826 - 12771 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000031326s
	[INFO] 10.244.0.5:55826 - 5064 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000032286s
	[INFO] 10.244.0.5:55826 - 61073 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000030555s
	[INFO] 10.244.0.5:55826 - 4337 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000030957s
	[INFO] 10.244.0.5:36150 - 33688 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000038924s
	[INFO] 10.244.0.5:36150 - 25560 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000044774s
	[INFO] 10.244.0.5:36150 - 260 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000032877s
	[INFO] 10.244.0.5:36150 - 16055 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000034969s
	[INFO] 10.244.0.5:36150 - 45800 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000034042s
	[INFO] 10.244.0.5:55826 - 40620 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00136159s
	[INFO] 10.244.0.5:55826 - 37818 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001395297s
	[INFO] 10.244.0.5:36150 - 40387 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001686407s
	[INFO] 10.244.0.5:55826 - 687 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000048155s
	[INFO] 10.244.0.5:36150 - 50646 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001046087s
	[INFO] 10.244.0.5:36150 - 43507 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000052971s
	
	
	==> describe nodes <==
	Name:               ingress-addon-legacy-406064
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-406064
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=49acfca761ba3cce5d2bedb7b4a0191c7f924d23
	                    minikube.k8s.io/name=ingress-addon-legacy-406064
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_15T11_03_15_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Jan 2024 11:03:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-406064
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Jan 2024 11:06:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Jan 2024 11:06:48 +0000   Mon, 15 Jan 2024 11:03:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Jan 2024 11:06:48 +0000   Mon, 15 Jan 2024 11:03:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Jan 2024 11:06:48 +0000   Mon, 15 Jan 2024 11:03:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Jan 2024 11:06:48 +0000   Mon, 15 Jan 2024 11:03:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-406064
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 9e3f706bbb234a3b9533946c2d95390c
	  System UUID:                fc0f406e-2a22-4802-bb22-45b83086e06b
	  Boot ID:                    2320f45f-1c30-479b-83e7-a1d3daee01d1
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-4n55n                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	  kube-system                 coredns-66bff467f8-l2tw7                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     3m36s
	  kube-system                 etcd-ingress-addon-legacy-406064                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 kindnet-7vw4p                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3m36s
	  kube-system                 kube-apiserver-ingress-addon-legacy-406064             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-406064    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 kube-proxy-7qcgz                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 kube-scheduler-ingress-addon-legacy-406064             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             120Mi (1%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  NodeHasSufficientMemory  4m2s (x5 over 4m2s)  kubelet     Node ingress-addon-legacy-406064 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m2s (x4 over 4m2s)  kubelet     Node ingress-addon-legacy-406064 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m2s (x4 over 4m2s)  kubelet     Node ingress-addon-legacy-406064 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m48s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m48s                kubelet     Node ingress-addon-legacy-406064 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m48s                kubelet     Node ingress-addon-legacy-406064 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m48s                kubelet     Node ingress-addon-legacy-406064 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m35s                kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m28s                kubelet     Node ingress-addon-legacy-406064 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.001143] FS-Cache: O-key=[8] '83663b0000000000'
	[  +0.000776] FS-Cache: N-cookie c=00000078 [p=0000006f fl=2 nc=0 na=1]
	[  +0.001066] FS-Cache: N-cookie d=00000000d00daa15{9p.inode} n=000000002dc74ee5
	[  +0.001134] FS-Cache: N-key=[8] '83663b0000000000'
	[  +0.003126] FS-Cache: Duplicate cookie detected
	[  +0.000775] FS-Cache: O-cookie c=00000072 [p=0000006f fl=226 nc=0 na=1]
	[  +0.001074] FS-Cache: O-cookie d=00000000d00daa15{9p.inode} n=00000000fe92a6ba
	[  +0.001159] FS-Cache: O-key=[8] '83663b0000000000'
	[  +0.000832] FS-Cache: N-cookie c=00000079 [p=0000006f fl=2 nc=0 na=1]
	[  +0.001030] FS-Cache: N-cookie d=00000000d00daa15{9p.inode} n=00000000739a828d
	[  +0.001132] FS-Cache: N-key=[8] '83663b0000000000'
	[  +3.222739] FS-Cache: Duplicate cookie detected
	[  +0.000781] FS-Cache: O-cookie c=00000070 [p=0000006f fl=226 nc=0 na=1]
	[  +0.001104] FS-Cache: O-cookie d=00000000d00daa15{9p.inode} n=0000000053a68b73
	[  +0.001216] FS-Cache: O-key=[8] '82663b0000000000'
	[  +0.000802] FS-Cache: N-cookie c=0000007b [p=0000006f fl=2 nc=0 na=1]
	[  +0.001049] FS-Cache: N-cookie d=00000000d00daa15{9p.inode} n=000000002dc74ee5
	[  +0.001173] FS-Cache: N-key=[8] '82663b0000000000'
	[  +0.302135] FS-Cache: Duplicate cookie detected
	[  +0.000774] FS-Cache: O-cookie c=00000075 [p=0000006f fl=226 nc=0 na=1]
	[  +0.001061] FS-Cache: O-cookie d=00000000d00daa15{9p.inode} n=00000000dec4625c
	[  +0.001206] FS-Cache: O-key=[8] '89663b0000000000'
	[  +0.000768] FS-Cache: N-cookie c=0000007c [p=0000006f fl=2 nc=0 na=1]
	[  +0.001039] FS-Cache: N-cookie d=00000000d00daa15{9p.inode} n=00000000142868f2
	[  +0.001158] FS-Cache: N-key=[8] '89663b0000000000'
	
	
	==> etcd [88983f285aba8e65fb3dd694b8addb4ef819329659ef69060938d599b31173fd] <==
	raft2024/01/15 11:03:05 INFO: aec36adc501070cc became follower at term 1
	raft2024/01/15 11:03:05 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2024-01-15 11:03:05.909618 W | auth: simple token is not cryptographically signed
	2024-01-15 11:03:05.913054 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2024-01-15 11:03:05.915992 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2024/01/15 11:03:05 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2024-01-15 11:03:05.916514 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2024-01-15 11:03:05.918933 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-01-15 11:03:05.919209 I | embed: listening for metrics on http://127.0.0.1:2381
	2024-01-15 11:03:05.919299 I | embed: listening for peers on 192.168.49.2:2380
	raft2024/01/15 11:03:06 INFO: aec36adc501070cc is starting a new election at term 1
	raft2024/01/15 11:03:06 INFO: aec36adc501070cc became candidate at term 2
	raft2024/01/15 11:03:06 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2024/01/15 11:03:06 INFO: aec36adc501070cc became leader at term 2
	raft2024/01/15 11:03:06 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2024-01-15 11:03:06.434331 I | etcdserver: setting up the initial cluster version to 3.4
	2024-01-15 11:03:06.458312 I | embed: ready to serve client requests
	2024-01-15 11:03:06.478327 I | embed: ready to serve client requests
	2024-01-15 11:03:06.495562 I | embed: serving client requests on 127.0.0.1:2379
	2024-01-15 11:03:06.671020 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-01-15 11:03:06.671104 I | etcdserver: published {Name:ingress-addon-legacy-406064 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2024-01-15 11:03:06.922345 I | etcdserver/api: enabled capabilities for version 3.4
	2024-01-15 11:03:06.922494 W | etcdserver: request "ID:8128026513566182916 Method:\"PUT\" Path:\"/0/version\" Val:\"3.4.0\" " with result "" took too long (416.019977ms) to execute
	2024-01-15 11:03:07.039583 I | embed: serving client requests on 192.168.49.2:2379
	2024-01-15 11:03:31.295572 W | etcdserver: read-only range request "key:\"/registry/endpointslices/kube-system/kube-dns-2gwdg\" " with result "range_response_count:1 size:849" took too long (136.196461ms) to execute
	
	
	==> kernel <==
	 11:07:06 up  9:49,  0 users,  load average: 0.72, 1.09, 1.63
	Linux ingress-addon-legacy-406064 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [ca98fc83f917017f55a99e42656c4589849b520f2128ec7c757ede7974446064] <==
	I0115 11:05:03.390845       1 main.go:227] handling current node
	I0115 11:05:13.397589       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 11:05:13.397617       1 main.go:227] handling current node
	I0115 11:05:23.401694       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 11:05:23.401721       1 main.go:227] handling current node
	I0115 11:05:33.405476       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 11:05:33.405507       1 main.go:227] handling current node
	I0115 11:05:43.408773       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 11:05:43.408804       1 main.go:227] handling current node
	I0115 11:05:53.416503       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 11:05:53.416533       1 main.go:227] handling current node
	I0115 11:06:03.427035       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 11:06:03.427062       1 main.go:227] handling current node
	I0115 11:06:13.435714       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 11:06:13.435743       1 main.go:227] handling current node
	I0115 11:06:23.446685       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 11:06:23.446711       1 main.go:227] handling current node
	I0115 11:06:33.449922       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 11:06:33.449950       1 main.go:227] handling current node
	I0115 11:06:43.472433       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 11:06:43.472532       1 main.go:227] handling current node
	I0115 11:06:53.476574       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 11:06:53.476601       1 main.go:227] handling current node
	I0115 11:07:03.479925       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 11:07:03.479956       1 main.go:227] handling current node
	
	
	==> kube-apiserver [a702046bed4f4d2083d7a0a5bb67a0babf7c78f8802e432159d4fe014a19b10c] <==
	I0115 11:03:12.442684       1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
	E0115 11:03:12.494607       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I0115 11:03:12.498916       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0115 11:03:12.502953       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0115 11:03:12.503079       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0115 11:03:12.503146       1 cache.go:39] Caches are synced for autoregister controller
	I0115 11:03:12.553653       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0115 11:03:13.292844       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0115 11:03:13.292961       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0115 11:03:13.297744       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0115 11:03:13.301792       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0115 11:03:13.301870       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0115 11:03:13.679904       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0115 11:03:13.715950       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0115 11:03:13.789560       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0115 11:03:13.790698       1 controller.go:609] quota admission added evaluator for: endpoints
	I0115 11:03:13.794250       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0115 11:03:14.740093       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0115 11:03:15.200913       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0115 11:03:15.296153       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0115 11:03:18.637771       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0115 11:03:30.158389       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0115 11:03:30.190250       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0115 11:03:52.215235       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0115 11:04:20.287088       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	
	==> kube-controller-manager [35be5f83f36c2fd1a957cd693a03a7b664281088af97a629fa2f2ac12cf0537a] <==
	I0115 11:03:30.283716       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"3c071b28-d24f-4d50-9750-350082190154", APIVersion:"apps/v1", ResourceVersion:"224", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-7vw4p
	E0115 11:03:30.322755       1 daemon_controller.go:321] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"99c0e662-696b-4fb3-8a5b-bee39fba4396", ResourceVersion:"213", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63840913395, loc:(*time.Location)(0x6307ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x400194cb00), FieldsType:"FieldsV1", FieldsV1:(*v1.Fields
V1)(0x400194cb20)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x400194cb40), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(n
il), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x4001151e40), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSou
rce)(0x400194cb60), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.Pr
ojectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400194cb80), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolum
eSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.20", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x400194cbc0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList
(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x40019dc8c0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x40019fe0b8), Acti
veDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40002d8000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPoli
cy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400000ff20)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x40019fe108)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
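The "Operation cannot be fulfilled on daemonsets.apps \"kube-proxy\"" entry above is the API server's optimistic-concurrency check, not a persistent failure: the write carried a stale resourceVersion, and the controller simply re-reads the object and retries. As a minimal illustration (not part of the test run), the version the API server compares on every write can be read directly:

	# Sketch: the resourceVersion that must match for a write to be accepted.
	kubectl -n kube-system get daemonset kube-proxy -o jsonpath='{.metadata.resourceVersion}'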
	I0115 11:03:30.376709       1 shared_informer.go:230] Caches are synced for endpoint_slice 
	I0115 11:03:30.440541       1 shared_informer.go:230] Caches are synced for persistent volume 
	I0115 11:03:30.555928       1 shared_informer.go:230] Caches are synced for endpoint 
	I0115 11:03:30.570056       1 shared_informer.go:230] Caches are synced for bootstrap_signer 
	I0115 11:03:30.590632       1 shared_informer.go:230] Caches are synced for resource quota 
	I0115 11:03:30.642636       1 shared_informer.go:230] Caches are synced for resource quota 
	I0115 11:03:30.656540       1 shared_informer.go:230] Caches are synced for attach detach 
	I0115 11:03:30.741922       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0115 11:03:30.741945       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0115 11:03:30.775112       1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator 
	I0115 11:03:30.802922       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0115 11:03:30.860718       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"e59c21e6-5788-4094-90b9-267da38736ef", APIVersion:"apps/v1", ResourceVersion:"363", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I0115 11:03:31.112786       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"9aa1f1a8-b5bd-4316-b9ff-ae68cf190bf1", APIVersion:"apps/v1", ResourceVersion:"364", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-w8ndg
	I0115 11:03:40.191965       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0115 11:03:52.193880       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"c7264cc3-a84a-4086-9cce-880e96b1bbf4", APIVersion:"apps/v1", ResourceVersion:"479", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0115 11:03:52.238178       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"9776ba1e-ea52-4e76-81b9-de239f2a555d", APIVersion:"apps/v1", ResourceVersion:"480", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-2jpx2
	I0115 11:03:52.277161       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"5fc5dd79-efac-460e-8f2f-837fa742eaf1", APIVersion:"batch/v1", ResourceVersion:"490", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-zrhk7
	I0115 11:03:52.302118       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"66bef5a2-b191-44fc-8942-5e4417fce412", APIVersion:"batch/v1", ResourceVersion:"483", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-wlfmf
	I0115 11:03:54.979088       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"66bef5a2-b191-44fc-8942-5e4417fce412", APIVersion:"batch/v1", ResourceVersion:"506", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0115 11:03:55.950885       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"5fc5dd79-efac-460e-8f2f-837fa742eaf1", APIVersion:"batch/v1", ResourceVersion:"504", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0115 11:06:39.981658       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"73a3c23b-7d67-44a2-a323-1e55f03d06b9", APIVersion:"apps/v1", ResourceVersion:"725", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0115 11:06:40.009854       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"b9d8c1d7-48c5-4197-b5cf-81f048def633", APIVersion:"apps/v1", ResourceVersion:"726", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-4n55n
	
	
	==> kube-proxy [fbc51dc54ecb9b0fbf067af996c2b3497e44a93be0546d728d45de4359adc2f6] <==
	W0115 11:03:31.484688       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0115 11:03:31.502779       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I0115 11:03:31.502838       1 server_others.go:186] Using iptables Proxier.
	I0115 11:03:31.503419       1 server.go:583] Version: v1.18.20
	I0115 11:03:31.504774       1 config.go:315] Starting service config controller
	I0115 11:03:31.504890       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0115 11:03:31.505053       1 config.go:133] Starting endpoints config controller
	I0115 11:03:31.505115       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0115 11:03:31.605170       1 shared_informer.go:230] Caches are synced for service config 
	I0115 11:03:31.605321       1 shared_informer.go:230] Caches are synced for endpoints config 
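The `Unknown proxy mode "", assuming iptables proxy` warning above means the mode field in kube-proxy's configuration was left empty, so the iptables proxier is used by default. A quick way to confirm this, assuming the kubeadm-style ConfigMap that the DaemonSet spec above mounts at /var/lib/kube-proxy:

	# Sketch: an empty mode: "" falls back to the iptables proxier.
	kubectl -n kube-system get configmap kube-proxy -o jsonpath='{.data.config\.conf}' | grep 'mode:'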
	
	
	==> kube-scheduler [4d78c1df6452cd5d1654746f6debed47dc87875c325fb25f36ceb4d7fbe868ad] <==
	W0115 11:03:12.482812       1 authentication.go:349] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0115 11:03:12.482917       1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0115 11:03:12.482949       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0115 11:03:12.482990       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0115 11:03:12.550377       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0115 11:03:12.550467       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0115 11:03:12.554321       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0115 11:03:12.554493       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0115 11:03:12.554528       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0115 11:03:12.557894       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0115 11:03:12.590677       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0115 11:03:12.591050       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0115 11:03:12.591176       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0115 11:03:12.591271       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0115 11:03:12.591369       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0115 11:03:12.591464       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0115 11:03:12.591582       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0115 11:03:12.591675       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0115 11:03:12.591776       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0115 11:03:12.591866       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0115 11:03:12.591962       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0115 11:03:12.592057       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0115 11:03:13.512917       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0115 11:03:15.858021       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0115 11:03:31.557818       1 factory.go:503] pod: kube-system/storage-provisioner is already present in unschedulable queue
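The burst of "Failed to list" errors at 11:03:12 is a startup race: the scheduler comes up before its RBAC bindings are in place, and the errors stop once caches sync at 11:03:15. For the extension-apiserver-authentication warning, the log itself suggests creating a rolebinding; a sketch with hypothetical placeholder values filled in for the binding name and subject:

	# Hypothetical instance of the workaround suggested in the log above;
	# the binding name and serviceaccount here are illustrative placeholders.
	kubectl create rolebinding -n kube-system auth-reader \
	  --role=extension-apiserver-authentication-reader \
	  --serviceaccount=kube-system:kube-scheduler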
	
	
	==> kubelet <==
	Jan 15 11:06:44 ingress-addon-legacy-406064 kubelet[1618]: I0115 11:06:44.291586    1618 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: f4ff614a6af9efc8c74d3c08b9326e4ce29c7e26ad37e04a7d4ebc2c3dfcfd49
	Jan 15 11:06:44 ingress-addon-legacy-406064 kubelet[1618]: I0115 11:06:44.291761    1618 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 4db75bd226092a3e3d975ec7158f3328b148eebb57544b8fae2f802a74596daa
	Jan 15 11:06:44 ingress-addon-legacy-406064 kubelet[1618]: E0115 11:06:44.292006    1618 pod_workers.go:191] Error syncing pod 528971be-03de-405b-b789-d5ea6f03ab54 ("hello-world-app-5f5d8b66bb-4n55n_default(528971be-03de-405b-b789-d5ea6f03ab54)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-4n55n_default(528971be-03de-405b-b789-d5ea6f03ab54)"
	Jan 15 11:06:45 ingress-addon-legacy-406064 kubelet[1618]: I0115 11:06:45.294832    1618 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 4db75bd226092a3e3d975ec7158f3328b148eebb57544b8fae2f802a74596daa
	Jan 15 11:06:45 ingress-addon-legacy-406064 kubelet[1618]: E0115 11:06:45.295093    1618 pod_workers.go:191] Error syncing pod 528971be-03de-405b-b789-d5ea6f03ab54 ("hello-world-app-5f5d8b66bb-4n55n_default(528971be-03de-405b-b789-d5ea6f03ab54)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-4n55n_default(528971be-03de-405b-b789-d5ea6f03ab54)"
	Jan 15 11:06:50 ingress-addon-legacy-406064 kubelet[1618]: E0115 11:06:50.700468    1618 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jan 15 11:06:50 ingress-addon-legacy-406064 kubelet[1618]: E0115 11:06:50.700503    1618 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jan 15 11:06:50 ingress-addon-legacy-406064 kubelet[1618]: E0115 11:06:50.700542    1618 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jan 15 11:06:50 ingress-addon-legacy-406064 kubelet[1618]: E0115 11:06:50.700576    1618 pod_workers.go:191] Error syncing pod a325b8f4-38f3-4742-b4f8-b248a71f315a ("kube-ingress-dns-minikube_kube-system(a325b8f4-38f3-4742-b4f8-b248a71f315a)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Jan 15 11:06:56 ingress-addon-legacy-406064 kubelet[1618]: I0115 11:06:56.071501    1618 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-ndtjk" (UniqueName: "kubernetes.io/secret/a325b8f4-38f3-4742-b4f8-b248a71f315a-minikube-ingress-dns-token-ndtjk") pod "a325b8f4-38f3-4742-b4f8-b248a71f315a" (UID: "a325b8f4-38f3-4742-b4f8-b248a71f315a")
	Jan 15 11:06:56 ingress-addon-legacy-406064 kubelet[1618]: I0115 11:06:56.075936    1618 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a325b8f4-38f3-4742-b4f8-b248a71f315a-minikube-ingress-dns-token-ndtjk" (OuterVolumeSpecName: "minikube-ingress-dns-token-ndtjk") pod "a325b8f4-38f3-4742-b4f8-b248a71f315a" (UID: "a325b8f4-38f3-4742-b4f8-b248a71f315a"). InnerVolumeSpecName "minikube-ingress-dns-token-ndtjk". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 15 11:06:56 ingress-addon-legacy-406064 kubelet[1618]: I0115 11:06:56.171976    1618 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-ndtjk" (UniqueName: "kubernetes.io/secret/a325b8f4-38f3-4742-b4f8-b248a71f315a-minikube-ingress-dns-token-ndtjk") on node "ingress-addon-legacy-406064" DevicePath ""
	Jan 15 11:06:56 ingress-addon-legacy-406064 kubelet[1618]: I0115 11:06:56.699539    1618 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 4db75bd226092a3e3d975ec7158f3328b148eebb57544b8fae2f802a74596daa
	Jan 15 11:06:57 ingress-addon-legacy-406064 kubelet[1618]: I0115 11:06:57.312033    1618 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 4db75bd226092a3e3d975ec7158f3328b148eebb57544b8fae2f802a74596daa
	Jan 15 11:06:57 ingress-addon-legacy-406064 kubelet[1618]: I0115 11:06:57.312274    1618 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: c239f1a605672bf6b1fc8461051aa2c79624e8390a9995e4afc4b68581408c0f
	Jan 15 11:06:57 ingress-addon-legacy-406064 kubelet[1618]: E0115 11:06:57.312513    1618 pod_workers.go:191] Error syncing pod 528971be-03de-405b-b789-d5ea6f03ab54 ("hello-world-app-5f5d8b66bb-4n55n_default(528971be-03de-405b-b789-d5ea6f03ab54)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-4n55n_default(528971be-03de-405b-b789-d5ea6f03ab54)"
	Jan 15 11:06:58 ingress-addon-legacy-406064 kubelet[1618]: E0115 11:06:58.256689    1618 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-2jpx2.17aa809c6d811f83", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-2jpx2", UID:"b17d1032-4389-4a42-8a98-d1e221688040", APIVersion:"v1", ResourceVersion:"486", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-406064"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc16162548f1d6b83, ext:223128283086, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc16162548f1d6b83, ext:223128283086, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-2jpx2.17aa809c6d811f83" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jan 15 11:06:58 ingress-addon-legacy-406064 kubelet[1618]: E0115 11:06:58.272163    1618 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-2jpx2.17aa809c6d811f83", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-2jpx2", UID:"b17d1032-4389-4a42-8a98-d1e221688040", APIVersion:"v1", ResourceVersion:"486", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-406064"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc16162548f1d6b83, ext:223128283086, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc16162548f924bea, ext:223135942709, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-2jpx2.17aa809c6d811f83" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jan 15 11:07:01 ingress-addon-legacy-406064 kubelet[1618]: W0115 11:07:01.322660    1618 pod_container_deletor.go:77] Container "720da7cbda550a2fff892b4b44bfcdec28a3cdc29f7aa6454da7385581c78eea" not found in pod's containers
	Jan 15 11:07:02 ingress-addon-legacy-406064 kubelet[1618]: I0115 11:07:02.390560    1618 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-twqnl" (UniqueName: "kubernetes.io/secret/b17d1032-4389-4a42-8a98-d1e221688040-ingress-nginx-token-twqnl") pod "b17d1032-4389-4a42-8a98-d1e221688040" (UID: "b17d1032-4389-4a42-8a98-d1e221688040")
	Jan 15 11:07:02 ingress-addon-legacy-406064 kubelet[1618]: I0115 11:07:02.390614    1618 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/b17d1032-4389-4a42-8a98-d1e221688040-webhook-cert") pod "b17d1032-4389-4a42-8a98-d1e221688040" (UID: "b17d1032-4389-4a42-8a98-d1e221688040")
	Jan 15 11:07:02 ingress-addon-legacy-406064 kubelet[1618]: I0115 11:07:02.396820    1618 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b17d1032-4389-4a42-8a98-d1e221688040-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "b17d1032-4389-4a42-8a98-d1e221688040" (UID: "b17d1032-4389-4a42-8a98-d1e221688040"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 15 11:07:02 ingress-addon-legacy-406064 kubelet[1618]: I0115 11:07:02.397384    1618 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b17d1032-4389-4a42-8a98-d1e221688040-ingress-nginx-token-twqnl" (OuterVolumeSpecName: "ingress-nginx-token-twqnl") pod "b17d1032-4389-4a42-8a98-d1e221688040" (UID: "b17d1032-4389-4a42-8a98-d1e221688040"). InnerVolumeSpecName "ingress-nginx-token-twqnl". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 15 11:07:02 ingress-addon-legacy-406064 kubelet[1618]: I0115 11:07:02.490988    1618 reconciler.go:319] Volume detached for volume "ingress-nginx-token-twqnl" (UniqueName: "kubernetes.io/secret/b17d1032-4389-4a42-8a98-d1e221688040-ingress-nginx-token-twqnl") on node "ingress-addon-legacy-406064" DevicePath ""
	Jan 15 11:07:02 ingress-addon-legacy-406064 kubelet[1618]: I0115 11:07:02.491045    1618 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/b17d1032-4389-4a42-8a98-d1e221688040-webhook-cert") on node "ingress-addon-legacy-406064" DevicePath ""
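The ImageInspectError entries at 11:06:50 are the likely root cause of the ingress-dns part of this failure: CRI-O refuses the short (unqualified) image name because /etc/containers/registries.conf on the node defines no unqualified-search registries. One possible fix, run on the node (a sketch, not what the test does); pulling by a fully qualified name such as docker.io/cryptexlabs/minikube-ingress-dns:... would avoid the issue entirely:

	# Give CRI-O a default search registry so short names resolve, then restart it.
	cat >>/etc/containers/registries.conf <<'EOF'
	unqualified-search-registries = ["docker.io"]
	EOF
	systemctl restart crio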
	
	
	==> storage-provisioner [17f8d68600a64c112124000d4256d94b4d960ee092f9845750e2932ad9b0ac52] <==
	I0115 11:03:43.607762       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0115 11:03:43.629988       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0115 11:03:43.630164       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0115 11:03:43.656110       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0115 11:03:43.656405       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-406064_dbc1fe24-b4af-44e3-aa31-aac153aa8d3c!
	I0115 11:03:43.659003       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0568dadb-4a2f-49b9-9c7e-811c55098f4b", APIVersion:"v1", ResourceVersion:"426", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-406064_dbc1fe24-b4af-44e3-aa31-aac153aa8d3c became leader
	I0115 11:03:43.758393       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-406064_dbc1fe24-b4af-44e3-aa31-aac153aa8d3c!
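The provisioner only starts its controller after winning leader election; the lease is recorded as an annotation (control-plane.alpha.kubernetes.io/leader) on the kube-system/k8s.io-minikube-hostpath Endpoints object named in the events above. A sketch for inspecting the election record:

	# The leader-election record is stored in the object's annotations.
	kubectl -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml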
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-406064 -n ingress-addon-legacy-406064
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-406064 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (183.29s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (4.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-279658 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-279658 -- exec busybox-5bc68d56bd-drm6d -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-279658 -- exec busybox-5bc68d56bd-drm6d -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-279658 -- exec busybox-5bc68d56bd-drm6d -- sh -c "ping -c 1 192.168.58.1": exit status 1 (222.47329ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-drm6d): exit status 1
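For context, the host IP used here comes from the preceding nslookup pipeline: with busybox's nslookup, line 5 of the output has the form "Address 1: <ip> <name>", so field 3 is the resolved address of host.minikube.internal, which is 192.168.58.1 (the docker network gateway, see the inspect output below). A standalone sketch of that extraction:

	# Line 5 of busybox nslookup output: "Address 1: <ip> <name>"; field 3 is the IP.
	nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3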
multinode_test.go:588: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-279658 -- exec busybox-5bc68d56bd-nn8t2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-279658 -- exec busybox-5bc68d56bd-nn8t2 -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-279658 -- exec busybox-5bc68d56bd-nn8t2 -- sh -c "ping -c 1 192.168.58.1": exit status 1 (223.758738ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-nn8t2): exit status 1
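"ping: permission denied (are you root?)" is busybox ping failing to open an ICMP socket: the pod runs without CAP_NET_RAW, and unprivileged ICMP datagram sockets are only permitted for groups listed in the net.ipv4.ping_group_range sysctl. A sketch of one way to unblock it inside the container (an assumption about the environment, not something the test attempts); granting CAP_NET_RAW in the pod's securityContext is the usual alternative:

	# Allow all groups to open unprivileged ICMP (ping) sockets.
	sysctl -w net.ipv4.ping_group_range="0 2147483647"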
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-279658
helpers_test.go:235: (dbg) docker inspect multinode-279658:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a18a2b3c9b565e6af2c30d7338137b4960649a9ec9dbde78f7aef931d0441cd5",
	        "Created": "2024-01-15T11:13:02.92187014Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1694166,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-15T11:13:03.251847937Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20e2d9b56eb2e595fd2b9c5719a0e58f3d7f8c692190d8fde2558cb6a9714f01",
	        "ResolvConfPath": "/var/lib/docker/containers/a18a2b3c9b565e6af2c30d7338137b4960649a9ec9dbde78f7aef931d0441cd5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a18a2b3c9b565e6af2c30d7338137b4960649a9ec9dbde78f7aef931d0441cd5/hostname",
	        "HostsPath": "/var/lib/docker/containers/a18a2b3c9b565e6af2c30d7338137b4960649a9ec9dbde78f7aef931d0441cd5/hosts",
	        "LogPath": "/var/lib/docker/containers/a18a2b3c9b565e6af2c30d7338137b4960649a9ec9dbde78f7aef931d0441cd5/a18a2b3c9b565e6af2c30d7338137b4960649a9ec9dbde78f7aef931d0441cd5-json.log",
	        "Name": "/multinode-279658",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-279658:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-279658",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/61e87476978e8b31cd215b118d38ae14112abab17fddc6f235e8db712af7ccec-init/diff:/var/lib/docker/overlay2/875764cb66056ccf89d3b82171ed27a7d9d817926a8469405b5a9bf1621232cc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/61e87476978e8b31cd215b118d38ae14112abab17fddc6f235e8db712af7ccec/merged",
	                "UpperDir": "/var/lib/docker/overlay2/61e87476978e8b31cd215b118d38ae14112abab17fddc6f235e8db712af7ccec/diff",
	                "WorkDir": "/var/lib/docker/overlay2/61e87476978e8b31cd215b118d38ae14112abab17fddc6f235e8db712af7ccec/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-279658",
	                "Source": "/var/lib/docker/volumes/multinode-279658/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-279658",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-279658",
	                "name.minikube.sigs.k8s.io": "multinode-279658",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "181101e52a664f664e447026e3a0bb63677d499a473fdea3d5ddf28242be73cf",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34794"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34793"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34790"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34792"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34791"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/181101e52a66",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-279658": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "a18a2b3c9b56",
	                        "multinode-279658"
	                    ],
	                    "NetworkID": "3f970086cb881d17bb1f492f80b61138728fc6557c769f9f49dea50afc93434a",
	                    "EndpointID": "09035f64bd0586f22d5a0102894a8687e6413719ef5addf1061edde913052a1c",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
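The inspect output confirms that the ping target 192.168.58.1 is simply the gateway of the multinode-279658 docker network (the host side of the bridge), so the address is expected to be reachable and the failure above is the in-pod permission error rather than a networking problem. A sketch for reading the same address straight from the network object:

	# Gateway of the cluster's docker network (the address the pods tried to ping).
	docker network inspect multinode-279658 --format '{{(index .IPAM.Config 0).Gateway}}'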
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p multinode-279658 -n multinode-279658
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279658 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p multinode-279658 logs -n 25: (1.615748124s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-681675                           | mount-start-2-681675 | jenkins | v1.32.0 | 15 Jan 24 11:12 UTC | 15 Jan 24 11:12 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-681675 ssh -- ls                    | mount-start-2-681675 | jenkins | v1.32.0 | 15 Jan 24 11:12 UTC | 15 Jan 24 11:12 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-679878                           | mount-start-1-679878 | jenkins | v1.32.0 | 15 Jan 24 11:12 UTC | 15 Jan 24 11:12 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-681675 ssh -- ls                    | mount-start-2-681675 | jenkins | v1.32.0 | 15 Jan 24 11:12 UTC | 15 Jan 24 11:12 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-681675                           | mount-start-2-681675 | jenkins | v1.32.0 | 15 Jan 24 11:12 UTC | 15 Jan 24 11:12 UTC |
	| start   | -p mount-start-2-681675                           | mount-start-2-681675 | jenkins | v1.32.0 | 15 Jan 24 11:12 UTC | 15 Jan 24 11:12 UTC |
	| ssh     | mount-start-2-681675 ssh -- ls                    | mount-start-2-681675 | jenkins | v1.32.0 | 15 Jan 24 11:12 UTC | 15 Jan 24 11:12 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-681675                           | mount-start-2-681675 | jenkins | v1.32.0 | 15 Jan 24 11:12 UTC | 15 Jan 24 11:12 UTC |
	| delete  | -p mount-start-1-679878                           | mount-start-1-679878 | jenkins | v1.32.0 | 15 Jan 24 11:12 UTC | 15 Jan 24 11:12 UTC |
	| start   | -p multinode-279658                               | multinode-279658     | jenkins | v1.32.0 | 15 Jan 24 11:12 UTC | 15 Jan 24 11:15 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-279658 -- apply -f                   | multinode-279658     | jenkins | v1.32.0 | 15 Jan 24 11:15 UTC | 15 Jan 24 11:15 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-279658 -- rollout                    | multinode-279658     | jenkins | v1.32.0 | 15 Jan 24 11:15 UTC | 15 Jan 24 11:15 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-279658 -- get pods -o                | multinode-279658     | jenkins | v1.32.0 | 15 Jan 24 11:15 UTC | 15 Jan 24 11:15 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-279658 -- get pods -o                | multinode-279658     | jenkins | v1.32.0 | 15 Jan 24 11:15 UTC | 15 Jan 24 11:15 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-279658 -- exec                       | multinode-279658     | jenkins | v1.32.0 | 15 Jan 24 11:15 UTC | 15 Jan 24 11:15 UTC |
	|         | busybox-5bc68d56bd-drm6d --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-279658 -- exec                       | multinode-279658     | jenkins | v1.32.0 | 15 Jan 24 11:15 UTC | 15 Jan 24 11:15 UTC |
	|         | busybox-5bc68d56bd-nn8t2 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-279658 -- exec                       | multinode-279658     | jenkins | v1.32.0 | 15 Jan 24 11:15 UTC | 15 Jan 24 11:15 UTC |
	|         | busybox-5bc68d56bd-drm6d --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-279658 -- exec                       | multinode-279658     | jenkins | v1.32.0 | 15 Jan 24 11:15 UTC | 15 Jan 24 11:15 UTC |
	|         | busybox-5bc68d56bd-nn8t2 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-279658 -- exec                       | multinode-279658     | jenkins | v1.32.0 | 15 Jan 24 11:15 UTC | 15 Jan 24 11:15 UTC |
	|         | busybox-5bc68d56bd-drm6d -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-279658 -- exec                       | multinode-279658     | jenkins | v1.32.0 | 15 Jan 24 11:15 UTC | 15 Jan 24 11:15 UTC |
	|         | busybox-5bc68d56bd-nn8t2 -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-279658 -- get pods -o                | multinode-279658     | jenkins | v1.32.0 | 15 Jan 24 11:15 UTC | 15 Jan 24 11:15 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-279658 -- exec                       | multinode-279658     | jenkins | v1.32.0 | 15 Jan 24 11:15 UTC | 15 Jan 24 11:15 UTC |
	|         | busybox-5bc68d56bd-drm6d                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-279658 -- exec                       | multinode-279658     | jenkins | v1.32.0 | 15 Jan 24 11:15 UTC |                     |
	|         | busybox-5bc68d56bd-drm6d -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-279658 -- exec                       | multinode-279658     | jenkins | v1.32.0 | 15 Jan 24 11:15 UTC | 15 Jan 24 11:15 UTC |
	|         | busybox-5bc68d56bd-nn8t2                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-279658 -- exec                       | multinode-279658     | jenkins | v1.32.0 | 15 Jan 24 11:15 UTC |                     |
	|         | busybox-5bc68d56bd-nn8t2 -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/15 11:12:57
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0115 11:12:57.355148 1693723 out.go:296] Setting OutFile to fd 1 ...
	I0115 11:12:57.355304 1693723 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 11:12:57.355314 1693723 out.go:309] Setting ErrFile to fd 2...
	I0115 11:12:57.355321 1693723 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 11:12:57.355616 1693723 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-1625104/.minikube/bin
	I0115 11:12:57.356125 1693723 out.go:303] Setting JSON to false
	I0115 11:12:57.357077 1693723 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":35719,"bootTime":1705281458,"procs":164,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0115 11:12:57.357163 1693723 start.go:138] virtualization:  
	I0115 11:12:57.360432 1693723 out.go:177] * [multinode-279658] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0115 11:12:57.363772 1693723 out.go:177]   - MINIKUBE_LOCATION=17953
	I0115 11:12:57.364021 1693723 notify.go:220] Checking for updates...
	I0115 11:12:57.368921 1693723 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 11:12:57.371601 1693723 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17953-1625104/kubeconfig
	I0115 11:12:57.374182 1693723 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-1625104/.minikube
	I0115 11:12:57.376684 1693723 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0115 11:12:57.379078 1693723 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0115 11:12:57.381800 1693723 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 11:12:57.406925 1693723 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0115 11:12:57.407045 1693723 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 11:12:57.493267 1693723 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2024-01-15 11:12:57.483608734 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0115 11:12:57.493377 1693723 docker.go:295] overlay module found
	I0115 11:12:57.496060 1693723 out.go:177] * Using the docker driver based on user configuration
	I0115 11:12:57.498713 1693723 start.go:298] selected driver: docker
	I0115 11:12:57.498728 1693723 start.go:902] validating driver "docker" against <nil>
	I0115 11:12:57.498741 1693723 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0115 11:12:57.499371 1693723 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 11:12:57.567216 1693723 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2024-01-15 11:12:57.55706832 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0115 11:12:57.567383 1693723 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0115 11:12:57.567678 1693723 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0115 11:12:57.570273 1693723 out.go:177] * Using Docker driver with root privileges
	I0115 11:12:57.572778 1693723 cni.go:84] Creating CNI manager for ""
	I0115 11:12:57.572797 1693723 cni.go:136] 0 nodes found, recommending kindnet
	I0115 11:12:57.572814 1693723 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0115 11:12:57.572828 1693723 start_flags.go:321] config:
	{Name:multinode-279658 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-279658 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 11:12:57.575829 1693723 out.go:177] * Starting control plane node multinode-279658 in cluster multinode-279658
	I0115 11:12:57.578397 1693723 cache.go:121] Beginning downloading kic base image for docker with crio
	I0115 11:12:57.581206 1693723 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0115 11:12:57.583785 1693723 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0115 11:12:57.583832 1693723 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17953-1625104/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I0115 11:12:57.583843 1693723 cache.go:56] Caching tarball of preloaded images
	I0115 11:12:57.583902 1693723 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0115 11:12:57.583922 1693723 preload.go:174] Found /home/jenkins/minikube-integration/17953-1625104/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0115 11:12:57.583932 1693723 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0115 11:12:57.584286 1693723 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/multinode-279658/config.json ...
	I0115 11:12:57.584343 1693723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/multinode-279658/config.json: {Name:mk3b929cad0aaabe7a4af1352041e2ed405b015e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 11:12:57.601190 1693723 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0115 11:12:57.601218 1693723 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0115 11:12:57.601239 1693723 cache.go:194] Successfully downloaded all kic artifacts
	I0115 11:12:57.601314 1693723 start.go:365] acquiring machines lock for multinode-279658: {Name:mk2667e3c1bd1e4515901e0d4c99a025007bd768 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 11:12:57.601447 1693723 start.go:369] acquired machines lock for "multinode-279658" in 106.27µs
	I0115 11:12:57.601483 1693723 start.go:93] Provisioning new machine with config: &{Name:multinode-279658 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-279658 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0115 11:12:57.601563 1693723 start.go:125] createHost starting for "" (driver="docker")
	I0115 11:12:57.604711 1693723 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0115 11:12:57.604947 1693723 start.go:159] libmachine.API.Create for "multinode-279658" (driver="docker")
	I0115 11:12:57.605004 1693723 client.go:168] LocalClient.Create starting
	I0115 11:12:57.605095 1693723 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/ca.pem
	I0115 11:12:57.605134 1693723 main.go:141] libmachine: Decoding PEM data...
	I0115 11:12:57.605152 1693723 main.go:141] libmachine: Parsing certificate...
	I0115 11:12:57.605212 1693723 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/cert.pem
	I0115 11:12:57.605236 1693723 main.go:141] libmachine: Decoding PEM data...
	I0115 11:12:57.605250 1693723 main.go:141] libmachine: Parsing certificate...
	I0115 11:12:57.605610 1693723 cli_runner.go:164] Run: docker network inspect multinode-279658 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0115 11:12:57.623181 1693723 cli_runner.go:211] docker network inspect multinode-279658 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0115 11:12:57.623261 1693723 network_create.go:281] running [docker network inspect multinode-279658] to gather additional debugging logs...
	I0115 11:12:57.623283 1693723 cli_runner.go:164] Run: docker network inspect multinode-279658
	W0115 11:12:57.639822 1693723 cli_runner.go:211] docker network inspect multinode-279658 returned with exit code 1
	I0115 11:12:57.639854 1693723 network_create.go:284] error running [docker network inspect multinode-279658]: docker network inspect multinode-279658: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-279658 not found
	I0115 11:12:57.639870 1693723 network_create.go:286] output of [docker network inspect multinode-279658]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-279658 not found
	
	** /stderr **
	I0115 11:12:57.639972 1693723 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0115 11:12:57.656892 1693723 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9d6252041710 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:f4:cb:32:62} reservation:<nil>}
	I0115 11:12:57.657253 1693723 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40024abc00}
	I0115 11:12:57.657275 1693723 network_create.go:124] attempt to create docker network multinode-279658 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0115 11:12:57.657336 1693723 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-279658 multinode-279658
	I0115 11:12:57.729958 1693723 network_create.go:108] docker network multinode-279658 192.168.58.0/24 created
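	(Editor's note: the network create above is a plain docker CLI call; if the subnet or gateway ever needs checking after the fact, a quick sketch:
	
		# Confirm the subnet and gateway of the cluster network created above.
		docker network inspect multinode-279658 \
		  --format '{{(index .IPAM.Config 0).Subnet}} via {{(index .IPAM.Config 0).Gateway}}'
		# expected here: 192.168.58.0/24 via 192.168.58.1
	)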
	I0115 11:12:57.729995 1693723 kic.go:121] calculated static IP "192.168.58.2" for the "multinode-279658" container
	I0115 11:12:57.730082 1693723 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0115 11:12:57.747416 1693723 cli_runner.go:164] Run: docker volume create multinode-279658 --label name.minikube.sigs.k8s.io=multinode-279658 --label created_by.minikube.sigs.k8s.io=true
	I0115 11:12:57.765721 1693723 oci.go:103] Successfully created a docker volume multinode-279658
	I0115 11:12:57.765814 1693723 cli_runner.go:164] Run: docker run --rm --name multinode-279658-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-279658 --entrypoint /usr/bin/test -v multinode-279658:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0115 11:12:58.327925 1693723 oci.go:107] Successfully prepared a docker volume multinode-279658
	I0115 11:12:58.327982 1693723 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0115 11:12:58.328002 1693723 kic.go:194] Starting extracting preloaded images to volume ...
	I0115 11:12:58.328081 1693723 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17953-1625104/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-279658:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0115 11:13:02.832662 1693723 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17953-1625104/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-279658:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.504543573s)
	I0115 11:13:02.832693 1693723 kic.go:203] duration metric: took 4.504688 seconds to extract preloaded images to volume
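	(Editor's note: the extraction above is the usual volume-seeding pattern: a throwaway container bind-mounts the lz4 preload read-only and untars it into the named volume that later backs /var in the node container. A generic sketch of the same pattern, where VOLUME, TARBALL.tar.lz4 and IMAGE are placeholders and the image must ship tar and lz4:
	
		docker volume create VOLUME
		docker run --rm \
		  -v "$PWD/TARBALL.tar.lz4:/preloaded.tar:ro" \
		  -v VOLUME:/extractDir \
		  --entrypoint /usr/bin/tar \
		  IMAGE -I lz4 -xf /preloaded.tar -C /extractDir
	)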
	W0115 11:13:02.832841 1693723 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0115 11:13:02.832955 1693723 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0115 11:13:02.905142 1693723 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-279658 --name multinode-279658 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-279658 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-279658 --network multinode-279658 --ip 192.168.58.2 --volume multinode-279658:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0
	I0115 11:13:03.260033 1693723 cli_runner.go:164] Run: docker container inspect multinode-279658 --format={{.State.Running}}
	I0115 11:13:03.286748 1693723 cli_runner.go:164] Run: docker container inspect multinode-279658 --format={{.State.Status}}
	I0115 11:13:03.312021 1693723 cli_runner.go:164] Run: docker exec multinode-279658 stat /var/lib/dpkg/alternatives/iptables
	I0115 11:13:03.377855 1693723 oci.go:144] the created container "multinode-279658" has a running status.
	I0115 11:13:03.377885 1693723 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17953-1625104/.minikube/machines/multinode-279658/id_rsa...
	I0115 11:13:03.626110 1693723 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-1625104/.minikube/machines/multinode-279658/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0115 11:13:03.626159 1693723 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17953-1625104/.minikube/machines/multinode-279658/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0115 11:13:03.651915 1693723 cli_runner.go:164] Run: docker container inspect multinode-279658 --format={{.State.Status}}
	I0115 11:13:03.676838 1693723 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0115 11:13:03.676864 1693723 kic_runner.go:114] Args: [docker exec --privileged multinode-279658 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0115 11:13:03.766247 1693723 cli_runner.go:164] Run: docker container inspect multinode-279658 --format={{.State.Status}}
	I0115 11:13:03.789329 1693723 machine.go:88] provisioning docker machine ...
	I0115 11:13:03.789362 1693723 ubuntu.go:169] provisioning hostname "multinode-279658"
	I0115 11:13:03.789422 1693723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-279658
	I0115 11:13:03.816790 1693723 main.go:141] libmachine: Using SSH client type: native
	I0115 11:13:03.817261 1693723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfbd0] 0x3c2340 <nil>  [] 0s} 127.0.0.1 34794 <nil> <nil>}
	I0115 11:13:03.817274 1693723 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-279658 && echo "multinode-279658" | sudo tee /etc/hostname
	I0115 11:13:03.817967 1693723 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0115 11:13:06.977168 1693723 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-279658
	
	I0115 11:13:06.977336 1693723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-279658
	I0115 11:13:06.996225 1693723 main.go:141] libmachine: Using SSH client type: native
	I0115 11:13:06.996628 1693723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfbd0] 0x3c2340 <nil>  [] 0s} 127.0.0.1 34794 <nil> <nil>}
	I0115 11:13:06.996647 1693723 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-279658' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-279658/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-279658' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0115 11:13:07.135486 1693723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0115 11:13:07.135518 1693723 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17953-1625104/.minikube CaCertPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17953-1625104/.minikube}
	I0115 11:13:07.135540 1693723 ubuntu.go:177] setting up certificates
	I0115 11:13:07.135552 1693723 provision.go:83] configureAuth start
	I0115 11:13:07.135612 1693723 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-279658
	I0115 11:13:07.152479 1693723 provision.go:138] copyHostCerts
	I0115 11:13:07.152516 1693723 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17953-1625104/.minikube/ca.pem
	I0115 11:13:07.152545 1693723 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-1625104/.minikube/ca.pem, removing ...
	I0115 11:13:07.152555 1693723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-1625104/.minikube/ca.pem
	I0115 11:13:07.152631 1693723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17953-1625104/.minikube/ca.pem (1082 bytes)
	I0115 11:13:07.152711 1693723 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17953-1625104/.minikube/cert.pem
	I0115 11:13:07.152729 1693723 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-1625104/.minikube/cert.pem, removing ...
	I0115 11:13:07.152734 1693723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-1625104/.minikube/cert.pem
	I0115 11:13:07.152760 1693723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17953-1625104/.minikube/cert.pem (1123 bytes)
	I0115 11:13:07.152796 1693723 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17953-1625104/.minikube/key.pem
	I0115 11:13:07.152811 1693723 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-1625104/.minikube/key.pem, removing ...
	I0115 11:13:07.152815 1693723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-1625104/.minikube/key.pem
	I0115 11:13:07.152956 1693723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17953-1625104/.minikube/key.pem (1675 bytes)
	I0115 11:13:07.153047 1693723 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17953-1625104/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17953-1625104/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17953-1625104/.minikube/certs/ca-key.pem org=jenkins.multinode-279658 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-279658]
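	(Editor's note: the server certificate generated here lists the node IP plus the localhost and minikube names as SANs. If they need checking by hand, openssl can dump them; path from the log line above:
	
		openssl x509 -noout -text \
		  -in /home/jenkins/minikube-integration/17953-1625104/.minikube/machines/server.pem \
		  | grep -A1 'Subject Alternative Name'
	)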
	I0115 11:13:07.690452 1693723 provision.go:172] copyRemoteCerts
	I0115 11:13:07.690540 1693723 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0115 11:13:07.690593 1693723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-279658
	I0115 11:13:07.708145 1693723 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34794 SSHKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/multinode-279658/id_rsa Username:docker}
	I0115 11:13:07.809356 1693723 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-1625104/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0115 11:13:07.809435 1693723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-1625104/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0115 11:13:07.839746 1693723 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0115 11:13:07.839847 1693723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0115 11:13:07.871648 1693723 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-1625104/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0115 11:13:07.871725 1693723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-1625104/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0115 11:13:07.901155 1693723 provision.go:86] duration metric: configureAuth took 765.589767ms
	I0115 11:13:07.901220 1693723 ubuntu.go:193] setting minikube options for container-runtime
	I0115 11:13:07.901425 1693723 config.go:182] Loaded profile config "multinode-279658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 11:13:07.901538 1693723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-279658
	I0115 11:13:07.921155 1693723 main.go:141] libmachine: Using SSH client type: native
	I0115 11:13:07.921582 1693723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfbd0] 0x3c2340 <nil>  [] 0s} 127.0.0.1 34794 <nil> <nil>}
	I0115 11:13:07.921606 1693723 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0115 11:13:08.187591 1693723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0115 11:13:08.187618 1693723 machine.go:91] provisioned docker machine in 4.398267368s
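	(Editor's note: the %!s(MISSING) in the provisioning command a few lines up is a Go printf verb left without an argument by minikube's log formatting; the echoed file contents above suggest /etc/sysconfig/crio.minikube was still written as intended. A quick way to confirm on the node:
	
		minikube -p multinode-279658 ssh -- cat /etc/sysconfig/crio.minikube
		# expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	)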
	I0115 11:13:08.187629 1693723 client.go:171] LocalClient.Create took 10.582615387s
	I0115 11:13:08.187643 1693723 start.go:167] duration metric: libmachine.API.Create for "multinode-279658" took 10.582696124s
	I0115 11:13:08.187662 1693723 start.go:300] post-start starting for "multinode-279658" (driver="docker")
	I0115 11:13:08.187677 1693723 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0115 11:13:08.187743 1693723 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0115 11:13:08.187793 1693723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-279658
	I0115 11:13:08.206398 1693723 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34794 SSHKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/multinode-279658/id_rsa Username:docker}
	I0115 11:13:08.305502 1693723 ssh_runner.go:195] Run: cat /etc/os-release
	I0115 11:13:08.309555 1693723 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I0115 11:13:08.309576 1693723 command_runner.go:130] > NAME="Ubuntu"
	I0115 11:13:08.309584 1693723 command_runner.go:130] > VERSION_ID="22.04"
	I0115 11:13:08.309591 1693723 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I0115 11:13:08.309597 1693723 command_runner.go:130] > VERSION_CODENAME=jammy
	I0115 11:13:08.309602 1693723 command_runner.go:130] > ID=ubuntu
	I0115 11:13:08.309607 1693723 command_runner.go:130] > ID_LIKE=debian
	I0115 11:13:08.309613 1693723 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0115 11:13:08.309619 1693723 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0115 11:13:08.309631 1693723 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0115 11:13:08.309642 1693723 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0115 11:13:08.309653 1693723 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0115 11:13:08.309709 1693723 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0115 11:13:08.309738 1693723 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0115 11:13:08.309754 1693723 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0115 11:13:08.309763 1693723 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0115 11:13:08.309780 1693723 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-1625104/.minikube/addons for local assets ...
	I0115 11:13:08.309843 1693723 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-1625104/.minikube/files for local assets ...
	I0115 11:13:08.309922 1693723 filesync.go:149] local asset: /home/jenkins/minikube-integration/17953-1625104/.minikube/files/etc/ssl/certs/16304352.pem -> 16304352.pem in /etc/ssl/certs
	I0115 11:13:08.309933 1693723 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-1625104/.minikube/files/etc/ssl/certs/16304352.pem -> /etc/ssl/certs/16304352.pem
	I0115 11:13:08.310036 1693723 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0115 11:13:08.320813 1693723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-1625104/.minikube/files/etc/ssl/certs/16304352.pem --> /etc/ssl/certs/16304352.pem (1708 bytes)
	I0115 11:13:08.350538 1693723 start.go:303] post-start completed in 162.856296ms
	I0115 11:13:08.350911 1693723 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-279658
	I0115 11:13:08.368121 1693723 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/multinode-279658/config.json ...
	I0115 11:13:08.368445 1693723 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 11:13:08.368499 1693723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-279658
	I0115 11:13:08.386135 1693723 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34794 SSHKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/multinode-279658/id_rsa Username:docker}
	I0115 11:13:08.480177 1693723 command_runner.go:130] > 12%!
	(MISSING)I0115 11:13:08.480255 1693723 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0115 11:13:08.485986 1693723 command_runner.go:130] > 172G
	I0115 11:13:08.486020 1693723 start.go:128] duration metric: createHost completed in 10.884445996s
	I0115 11:13:08.486033 1693723 start.go:83] releasing machines lock for "multinode-279658", held for 10.884571532s
	I0115 11:13:08.486109 1693723 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-279658
	I0115 11:13:08.503245 1693723 ssh_runner.go:195] Run: cat /version.json
	I0115 11:13:08.503306 1693723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-279658
	I0115 11:13:08.503562 1693723 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0115 11:13:08.503619 1693723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-279658
	I0115 11:13:08.523639 1693723 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34794 SSHKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/multinode-279658/id_rsa Username:docker}
	I0115 11:13:08.531640 1693723 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34794 SSHKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/multinode-279658/id_rsa Username:docker}
	I0115 11:13:08.753798 1693723 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0115 11:13:08.757004 1693723 command_runner.go:130] > {"iso_version": "v1.32.1-1703784139-17866", "kicbase_version": "v0.0.42-1704759386-17866", "minikube_version": "v1.32.0", "commit": "3c45a4d018cdc90b01d9bcb479fb293aad58ed8f"}
	I0115 11:13:08.757164 1693723 ssh_runner.go:195] Run: systemctl --version
	I0115 11:13:08.762308 1693723 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.11)
	I0115 11:13:08.762350 1693723 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0115 11:13:08.762727 1693723 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0115 11:13:08.910160 1693723 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0115 11:13:08.917736 1693723 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0115 11:13:08.917769 1693723 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0115 11:13:08.917778 1693723 command_runner.go:130] > Device: 3ah/58d	Inode: 1823271     Links: 1
	I0115 11:13:08.917786 1693723 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0115 11:13:08.917801 1693723 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0115 11:13:08.917809 1693723 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0115 11:13:08.917818 1693723 command_runner.go:130] > Change: 2024-01-15 10:51:10.451580077 +0000
	I0115 11:13:08.917824 1693723 command_runner.go:130] >  Birth: 2024-01-15 10:51:10.451580077 +0000
	I0115 11:13:08.918354 1693723 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0115 11:13:08.946229 1693723 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0115 11:13:08.946362 1693723 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0115 11:13:08.985204 1693723 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0115 11:13:08.985319 1693723 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
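	(Editor's note: the disable step above renames matching CNI configs rather than deleting them, so they can be restored later. A shell-quoted sketch of roughly the same find invocation, omitting the -printf listing and using find's argument-passing idiom instead of interpolating {} into the sh string; the log shows the unquoted argv form that Go's exec passes directly:
	
		sudo find /etc/cni/net.d -maxdepth 1 -type f \
		  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
		  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
	)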
	I0115 11:13:08.985352 1693723 start.go:475] detecting cgroup driver to use...
	I0115 11:13:08.985407 1693723 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0115 11:13:08.985486 1693723 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0115 11:13:09.008151 1693723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0115 11:13:09.023156 1693723 docker.go:217] disabling cri-docker service (if available) ...
	I0115 11:13:09.023250 1693723 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0115 11:13:09.040437 1693723 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0115 11:13:09.058269 1693723 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0115 11:13:09.162842 1693723 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0115 11:13:09.276182 1693723 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0115 11:13:09.276233 1693723 docker.go:233] disabling docker service ...
	I0115 11:13:09.276309 1693723 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0115 11:13:09.299431 1693723 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0115 11:13:09.314019 1693723 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0115 11:13:09.420496 1693723 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0115 11:13:09.420576 1693723 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0115 11:13:09.524283 1693723 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0115 11:13:09.524364 1693723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0115 11:13:09.538486 1693723 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0115 11:13:09.557777 1693723 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0115 11:13:09.559394 1693723 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0115 11:13:09.559467 1693723 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 11:13:09.572957 1693723 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0115 11:13:09.573030 1693723 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 11:13:09.587680 1693723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 11:13:09.601677 1693723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 11:13:09.614362 1693723 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0115 11:13:09.626972 1693723 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0115 11:13:09.638942 1693723 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0115 11:13:09.639023 1693723 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0115 11:13:09.649842 1693723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 11:13:09.748326 1693723 ssh_runner.go:195] Run: sudo systemctl restart crio
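	(Editor's note: the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf carrying the pause image, cgroup manager, and conmon cgroup settings that this restart then picks up. A spot-check, run inside the node:
	
		grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
		# expected, per the sed commands above:
		# pause_image = "registry.k8s.io/pause:3.9"
		# cgroup_manager = "cgroupfs"
		# conmon_cgroup = "pod"
	)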
	I0115 11:13:09.865381 1693723 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0115 11:13:09.865496 1693723 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0115 11:13:09.870038 1693723 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0115 11:13:09.870061 1693723 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0115 11:13:09.870069 1693723 command_runner.go:130] > Device: 43h/67d	Inode: 186         Links: 1
	I0115 11:13:09.870078 1693723 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0115 11:13:09.870084 1693723 command_runner.go:130] > Access: 2024-01-15 11:13:09.846334555 +0000
	I0115 11:13:09.870094 1693723 command_runner.go:130] > Modify: 2024-01-15 11:13:09.846334555 +0000
	I0115 11:13:09.870101 1693723 command_runner.go:130] > Change: 2024-01-15 11:13:09.846334555 +0000
	I0115 11:13:09.870109 1693723 command_runner.go:130] >  Birth: -
	I0115 11:13:09.870141 1693723 start.go:543] Will wait 60s for crictl version
	I0115 11:13:09.870193 1693723 ssh_runner.go:195] Run: which crictl
	I0115 11:13:09.874688 1693723 command_runner.go:130] > /usr/bin/crictl
	I0115 11:13:09.874782 1693723 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0115 11:13:09.916755 1693723 command_runner.go:130] > Version:  0.1.0
	I0115 11:13:09.916779 1693723 command_runner.go:130] > RuntimeName:  cri-o
	I0115 11:13:09.916786 1693723 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0115 11:13:09.916792 1693723 command_runner.go:130] > RuntimeApiVersion:  v1
	I0115 11:13:09.916803 1693723 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0115 11:13:09.916870 1693723 ssh_runner.go:195] Run: crio --version
	I0115 11:13:09.959659 1693723 command_runner.go:130] > crio version 1.24.6
	I0115 11:13:09.959683 1693723 command_runner.go:130] > Version:          1.24.6
	I0115 11:13:09.959692 1693723 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0115 11:13:09.959698 1693723 command_runner.go:130] > GitTreeState:     clean
	I0115 11:13:09.959705 1693723 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0115 11:13:09.959711 1693723 command_runner.go:130] > GoVersion:        go1.18.2
	I0115 11:13:09.959716 1693723 command_runner.go:130] > Compiler:         gc
	I0115 11:13:09.959721 1693723 command_runner.go:130] > Platform:         linux/arm64
	I0115 11:13:09.959728 1693723 command_runner.go:130] > Linkmode:         dynamic
	I0115 11:13:09.959742 1693723 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0115 11:13:09.959750 1693723 command_runner.go:130] > SeccompEnabled:   true
	I0115 11:13:09.959756 1693723 command_runner.go:130] > AppArmorEnabled:  false
	I0115 11:13:09.961953 1693723 ssh_runner.go:195] Run: crio --version
	I0115 11:13:10.013987 1693723 command_runner.go:130] > crio version 1.24.6
	I0115 11:13:10.014011 1693723 command_runner.go:130] > Version:          1.24.6
	I0115 11:13:10.014020 1693723 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0115 11:13:10.014026 1693723 command_runner.go:130] > GitTreeState:     clean
	I0115 11:13:10.014033 1693723 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0115 11:13:10.014039 1693723 command_runner.go:130] > GoVersion:        go1.18.2
	I0115 11:13:10.014044 1693723 command_runner.go:130] > Compiler:         gc
	I0115 11:13:10.014049 1693723 command_runner.go:130] > Platform:         linux/arm64
	I0115 11:13:10.014056 1693723 command_runner.go:130] > Linkmode:         dynamic
	I0115 11:13:10.014068 1693723 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0115 11:13:10.014074 1693723 command_runner.go:130] > SeccompEnabled:   true
	I0115 11:13:10.014082 1693723 command_runner.go:130] > AppArmorEnabled:  false
	I0115 11:13:10.018817 1693723 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0115 11:13:10.021270 1693723 cli_runner.go:164] Run: docker network inspect multinode-279658 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0115 11:13:10.044906 1693723 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0115 11:13:10.050171 1693723 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
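	(Editor's note: this rewrites /etc/hosts inside the node so host.minikube.internal points at the docker network gateway; the nslookup of that name from the busybox pod in the table at the top of this log targets the same mapping, served to pods via CoreDNS, which minikube seeds later in startup. A quick check from inside the node:
	
		grep 'host.minikube.internal' /etc/hosts
		# expected: 192.168.58.1	host.minikube.internal
	)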
	I0115 11:13:10.064475 1693723 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0115 11:13:10.064549 1693723 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 11:13:10.137934 1693723 command_runner.go:130] > {
	I0115 11:13:10.137953 1693723 command_runner.go:130] >   "images": [
	I0115 11:13:10.137959 1693723 command_runner.go:130] >     {
	I0115 11:13:10.137969 1693723 command_runner.go:130] >       "id": "04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26",
	I0115 11:13:10.137975 1693723 command_runner.go:130] >       "repoTags": [
	I0115 11:13:10.137982 1693723 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0115 11:13:10.137987 1693723 command_runner.go:130] >       ],
	I0115 11:13:10.137993 1693723 command_runner.go:130] >       "repoDigests": [
	I0115 11:13:10.138004 1693723 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0115 11:13:10.138013 1693723 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"
	I0115 11:13:10.138026 1693723 command_runner.go:130] >       ],
	I0115 11:13:10.138035 1693723 command_runner.go:130] >       "size": "60867618",
	I0115 11:13:10.138041 1693723 command_runner.go:130] >       "uid": null,
	I0115 11:13:10.138046 1693723 command_runner.go:130] >       "username": "",
	I0115 11:13:10.138052 1693723 command_runner.go:130] >       "spec": null,
	I0115 11:13:10.138061 1693723 command_runner.go:130] >       "pinned": false
	I0115 11:13:10.138066 1693723 command_runner.go:130] >     },
	I0115 11:13:10.138070 1693723 command_runner.go:130] >     {
	I0115 11:13:10.138081 1693723 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I0115 11:13:10.138091 1693723 command_runner.go:130] >       "repoTags": [
	I0115 11:13:10.138098 1693723 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0115 11:13:10.138104 1693723 command_runner.go:130] >       ],
	I0115 11:13:10.138112 1693723 command_runner.go:130] >       "repoDigests": [
	I0115 11:13:10.138126 1693723 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I0115 11:13:10.138141 1693723 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0115 11:13:10.138151 1693723 command_runner.go:130] >       ],
	I0115 11:13:10.138161 1693723 command_runner.go:130] >       "size": "29037500",
	I0115 11:13:10.138172 1693723 command_runner.go:130] >       "uid": null,
	I0115 11:13:10.138177 1693723 command_runner.go:130] >       "username": "",
	I0115 11:13:10.138185 1693723 command_runner.go:130] >       "spec": null,
	I0115 11:13:10.138190 1693723 command_runner.go:130] >       "pinned": false
	I0115 11:13:10.138198 1693723 command_runner.go:130] >     },
	I0115 11:13:10.138203 1693723 command_runner.go:130] >     {
	I0115 11:13:10.138214 1693723 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I0115 11:13:10.138222 1693723 command_runner.go:130] >       "repoTags": [
	I0115 11:13:10.138229 1693723 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0115 11:13:10.138236 1693723 command_runner.go:130] >       ],
	I0115 11:13:10.138241 1693723 command_runner.go:130] >       "repoDigests": [
	I0115 11:13:10.138250 1693723 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I0115 11:13:10.138260 1693723 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I0115 11:13:10.138267 1693723 command_runner.go:130] >       ],
	I0115 11:13:10.138273 1693723 command_runner.go:130] >       "size": "51393451",
	I0115 11:13:10.138330 1693723 command_runner.go:130] >       "uid": null,
	I0115 11:13:10.138335 1693723 command_runner.go:130] >       "username": "",
	I0115 11:13:10.138343 1693723 command_runner.go:130] >       "spec": null,
	I0115 11:13:10.138348 1693723 command_runner.go:130] >       "pinned": false
	I0115 11:13:10.138353 1693723 command_runner.go:130] >     },
	I0115 11:13:10.138363 1693723 command_runner.go:130] >     {
	I0115 11:13:10.138375 1693723 command_runner.go:130] >       "id": "9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace",
	I0115 11:13:10.138383 1693723 command_runner.go:130] >       "repoTags": [
	I0115 11:13:10.138390 1693723 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0115 11:13:10.138397 1693723 command_runner.go:130] >       ],
	I0115 11:13:10.138403 1693723 command_runner.go:130] >       "repoDigests": [
	I0115 11:13:10.138415 1693723 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3",
	I0115 11:13:10.138424 1693723 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"
	I0115 11:13:10.138439 1693723 command_runner.go:130] >       ],
	I0115 11:13:10.138446 1693723 command_runner.go:130] >       "size": "182203183",
	I0115 11:13:10.138456 1693723 command_runner.go:130] >       "uid": {
	I0115 11:13:10.138461 1693723 command_runner.go:130] >         "value": "0"
	I0115 11:13:10.138469 1693723 command_runner.go:130] >       },
	I0115 11:13:10.138474 1693723 command_runner.go:130] >       "username": "",
	I0115 11:13:10.138479 1693723 command_runner.go:130] >       "spec": null,
	I0115 11:13:10.138488 1693723 command_runner.go:130] >       "pinned": false
	I0115 11:13:10.138492 1693723 command_runner.go:130] >     },
	I0115 11:13:10.138500 1693723 command_runner.go:130] >     {
	I0115 11:13:10.138510 1693723 command_runner.go:130] >       "id": "04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419",
	I0115 11:13:10.138515 1693723 command_runner.go:130] >       "repoTags": [
	I0115 11:13:10.138522 1693723 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0115 11:13:10.138530 1693723 command_runner.go:130] >       ],
	I0115 11:13:10.138536 1693723 command_runner.go:130] >       "repoDigests": [
	I0115 11:13:10.138548 1693723 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb",
	I0115 11:13:10.138561 1693723 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2"
	I0115 11:13:10.138570 1693723 command_runner.go:130] >       ],
	I0115 11:13:10.138575 1693723 command_runner.go:130] >       "size": "121119694",
	I0115 11:13:10.138583 1693723 command_runner.go:130] >       "uid": {
	I0115 11:13:10.138588 1693723 command_runner.go:130] >         "value": "0"
	I0115 11:13:10.138593 1693723 command_runner.go:130] >       },
	I0115 11:13:10.138598 1693723 command_runner.go:130] >       "username": "",
	I0115 11:13:10.138603 1693723 command_runner.go:130] >       "spec": null,
	I0115 11:13:10.138609 1693723 command_runner.go:130] >       "pinned": false
	I0115 11:13:10.138623 1693723 command_runner.go:130] >     },
	I0115 11:13:10.138628 1693723 command_runner.go:130] >     {
	I0115 11:13:10.138641 1693723 command_runner.go:130] >       "id": "9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b",
	I0115 11:13:10.138648 1693723 command_runner.go:130] >       "repoTags": [
	I0115 11:13:10.138655 1693723 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0115 11:13:10.138663 1693723 command_runner.go:130] >       ],
	I0115 11:13:10.138668 1693723 command_runner.go:130] >       "repoDigests": [
	I0115 11:13:10.138709 1693723 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0115 11:13:10.138725 1693723 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e"
	I0115 11:13:10.138730 1693723 command_runner.go:130] >       ],
	I0115 11:13:10.138738 1693723 command_runner.go:130] >       "size": "117252916",
	I0115 11:13:10.138747 1693723 command_runner.go:130] >       "uid": {
	I0115 11:13:10.138752 1693723 command_runner.go:130] >         "value": "0"
	I0115 11:13:10.138760 1693723 command_runner.go:130] >       },
	I0115 11:13:10.138765 1693723 command_runner.go:130] >       "username": "",
	I0115 11:13:10.138769 1693723 command_runner.go:130] >       "spec": null,
	I0115 11:13:10.138774 1693723 command_runner.go:130] >       "pinned": false
	I0115 11:13:10.138781 1693723 command_runner.go:130] >     },
	I0115 11:13:10.138785 1693723 command_runner.go:130] >     {
	I0115 11:13:10.138796 1693723 command_runner.go:130] >       "id": "3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39",
	I0115 11:13:10.138804 1693723 command_runner.go:130] >       "repoTags": [
	I0115 11:13:10.138814 1693723 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0115 11:13:10.138822 1693723 command_runner.go:130] >       ],
	I0115 11:13:10.138827 1693723 command_runner.go:130] >       "repoDigests": [
	I0115 11:13:10.138840 1693723 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68",
	I0115 11:13:10.138852 1693723 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0115 11:13:10.138857 1693723 command_runner.go:130] >       ],
	I0115 11:13:10.138865 1693723 command_runner.go:130] >       "size": "69992343",
	I0115 11:13:10.138873 1693723 command_runner.go:130] >       "uid": null,
	I0115 11:13:10.138878 1693723 command_runner.go:130] >       "username": "",
	I0115 11:13:10.138886 1693723 command_runner.go:130] >       "spec": null,
	I0115 11:13:10.138892 1693723 command_runner.go:130] >       "pinned": false
	I0115 11:13:10.138899 1693723 command_runner.go:130] >     },
	I0115 11:13:10.138904 1693723 command_runner.go:130] >     {
	I0115 11:13:10.138918 1693723 command_runner.go:130] >       "id": "05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54",
	I0115 11:13:10.138927 1693723 command_runner.go:130] >       "repoTags": [
	I0115 11:13:10.138933 1693723 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0115 11:13:10.138938 1693723 command_runner.go:130] >       ],
	I0115 11:13:10.138942 1693723 command_runner.go:130] >       "repoDigests": [
	I0115 11:13:10.138967 1693723 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0115 11:13:10.138981 1693723 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe"
	I0115 11:13:10.138990 1693723 command_runner.go:130] >       ],
	I0115 11:13:10.138995 1693723 command_runner.go:130] >       "size": "59253556",
	I0115 11:13:10.139003 1693723 command_runner.go:130] >       "uid": {
	I0115 11:13:10.139008 1693723 command_runner.go:130] >         "value": "0"
	I0115 11:13:10.139015 1693723 command_runner.go:130] >       },
	I0115 11:13:10.139020 1693723 command_runner.go:130] >       "username": "",
	I0115 11:13:10.139025 1693723 command_runner.go:130] >       "spec": null,
	I0115 11:13:10.139030 1693723 command_runner.go:130] >       "pinned": false
	I0115 11:13:10.139034 1693723 command_runner.go:130] >     },
	I0115 11:13:10.139041 1693723 command_runner.go:130] >     {
	I0115 11:13:10.139049 1693723 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I0115 11:13:10.139057 1693723 command_runner.go:130] >       "repoTags": [
	I0115 11:13:10.139063 1693723 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0115 11:13:10.139070 1693723 command_runner.go:130] >       ],
	I0115 11:13:10.139075 1693723 command_runner.go:130] >       "repoDigests": [
	I0115 11:13:10.139084 1693723 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I0115 11:13:10.139099 1693723 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I0115 11:13:10.139104 1693723 command_runner.go:130] >       ],
	I0115 11:13:10.139109 1693723 command_runner.go:130] >       "size": "520014",
	I0115 11:13:10.139114 1693723 command_runner.go:130] >       "uid": {
	I0115 11:13:10.139121 1693723 command_runner.go:130] >         "value": "65535"
	I0115 11:13:10.139130 1693723 command_runner.go:130] >       },
	I0115 11:13:10.139135 1693723 command_runner.go:130] >       "username": "",
	I0115 11:13:10.139143 1693723 command_runner.go:130] >       "spec": null,
	I0115 11:13:10.139148 1693723 command_runner.go:130] >       "pinned": false
	I0115 11:13:10.139155 1693723 command_runner.go:130] >     }
	I0115 11:13:10.139160 1693723 command_runner.go:130] >   ]
	I0115 11:13:10.139167 1693723 command_runner.go:130] > }
	I0115 11:13:10.142203 1693723 crio.go:496] all images are preloaded for cri-o runtime.
	I0115 11:13:10.142226 1693723 crio.go:415] Images already preloaded, skipping extraction
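The JSON above is the raw output of `crictl images --output json`, which minikube parses to decide that the preload tarball does not need to be re-extracted. As a rough, hedged illustration (this is not minikube's actual crio.go code; the struct and names below are hypothetical), a minimal Go sketch that decodes this payload shape and checks for one of the tags listed above could look like:

// Hypothetical sketch: decode the `crictl images --output json` payload and
// check that a required repo tag is present. Field names mirror the JSON in
// the log above; this is not minikube's real implementation.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type crictlImage struct {
	ID          string   `json:"id"`
	RepoTags    []string `json:"repoTags"`
	RepoDigests []string `json:"repoDigests"`
	Size        string   `json:"size"` // reported as a string, e.g. "182203183"
	Pinned      bool     `json:"pinned"`
}

type crictlImageList struct {
	Images []crictlImage `json:"images"`
}

func main() {
	// Assumes crictl is installed and sudo access is available.
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list crictlImageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	want := "registry.k8s.io/kube-apiserver:v1.28.4"
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				fmt.Printf("found %s (id %s)\n", want, img.ID)
				return
			}
		}
	}
	fmt.Printf("missing %s\n", want)
}

The identical listing that follows is a second run of the same command, this time feeding the image-cache check (cache_images.go) rather than the preload-extraction check (crio.go).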
	I0115 11:13:10.142298 1693723 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 11:13:10.194548 1693723 command_runner.go:130] > {
	I0115 11:13:10.194573 1693723 command_runner.go:130] >   "images": [
	I0115 11:13:10.194579 1693723 command_runner.go:130] >     {
	I0115 11:13:10.194589 1693723 command_runner.go:130] >       "id": "04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26",
	I0115 11:13:10.194595 1693723 command_runner.go:130] >       "repoTags": [
	I0115 11:13:10.194602 1693723 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0115 11:13:10.194610 1693723 command_runner.go:130] >       ],
	I0115 11:13:10.194616 1693723 command_runner.go:130] >       "repoDigests": [
	I0115 11:13:10.194629 1693723 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0115 11:13:10.194642 1693723 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"
	I0115 11:13:10.194646 1693723 command_runner.go:130] >       ],
	I0115 11:13:10.194651 1693723 command_runner.go:130] >       "size": "60867618",
	I0115 11:13:10.194660 1693723 command_runner.go:130] >       "uid": null,
	I0115 11:13:10.194665 1693723 command_runner.go:130] >       "username": "",
	I0115 11:13:10.194674 1693723 command_runner.go:130] >       "spec": null,
	I0115 11:13:10.194689 1693723 command_runner.go:130] >       "pinned": false
	I0115 11:13:10.194694 1693723 command_runner.go:130] >     },
	I0115 11:13:10.194699 1693723 command_runner.go:130] >     {
	I0115 11:13:10.194709 1693723 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I0115 11:13:10.194717 1693723 command_runner.go:130] >       "repoTags": [
	I0115 11:13:10.194724 1693723 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0115 11:13:10.194728 1693723 command_runner.go:130] >       ],
	I0115 11:13:10.194733 1693723 command_runner.go:130] >       "repoDigests": [
	I0115 11:13:10.194743 1693723 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I0115 11:13:10.194753 1693723 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0115 11:13:10.194757 1693723 command_runner.go:130] >       ],
	I0115 11:13:10.194764 1693723 command_runner.go:130] >       "size": "29037500",
	I0115 11:13:10.194769 1693723 command_runner.go:130] >       "uid": null,
	I0115 11:13:10.194773 1693723 command_runner.go:130] >       "username": "",
	I0115 11:13:10.194778 1693723 command_runner.go:130] >       "spec": null,
	I0115 11:13:10.194783 1693723 command_runner.go:130] >       "pinned": false
	I0115 11:13:10.194787 1693723 command_runner.go:130] >     },
	I0115 11:13:10.194792 1693723 command_runner.go:130] >     {
	I0115 11:13:10.194805 1693723 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I0115 11:13:10.194810 1693723 command_runner.go:130] >       "repoTags": [
	I0115 11:13:10.194820 1693723 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0115 11:13:10.194824 1693723 command_runner.go:130] >       ],
	I0115 11:13:10.194834 1693723 command_runner.go:130] >       "repoDigests": [
	I0115 11:13:10.194844 1693723 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I0115 11:13:10.194853 1693723 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I0115 11:13:10.194861 1693723 command_runner.go:130] >       ],
	I0115 11:13:10.194866 1693723 command_runner.go:130] >       "size": "51393451",
	I0115 11:13:10.194871 1693723 command_runner.go:130] >       "uid": null,
	I0115 11:13:10.194878 1693723 command_runner.go:130] >       "username": "",
	I0115 11:13:10.194888 1693723 command_runner.go:130] >       "spec": null,
	I0115 11:13:10.194893 1693723 command_runner.go:130] >       "pinned": false
	I0115 11:13:10.194897 1693723 command_runner.go:130] >     },
	I0115 11:13:10.194902 1693723 command_runner.go:130] >     {
	I0115 11:13:10.194912 1693723 command_runner.go:130] >       "id": "9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace",
	I0115 11:13:10.194917 1693723 command_runner.go:130] >       "repoTags": [
	I0115 11:13:10.194925 1693723 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0115 11:13:10.194930 1693723 command_runner.go:130] >       ],
	I0115 11:13:10.194935 1693723 command_runner.go:130] >       "repoDigests": [
	I0115 11:13:10.194946 1693723 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3",
	I0115 11:13:10.194955 1693723 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"
	I0115 11:13:10.194969 1693723 command_runner.go:130] >       ],
	I0115 11:13:10.194977 1693723 command_runner.go:130] >       "size": "182203183",
	I0115 11:13:10.194982 1693723 command_runner.go:130] >       "uid": {
	I0115 11:13:10.194986 1693723 command_runner.go:130] >         "value": "0"
	I0115 11:13:10.194993 1693723 command_runner.go:130] >       },
	I0115 11:13:10.194998 1693723 command_runner.go:130] >       "username": "",
	I0115 11:13:10.195006 1693723 command_runner.go:130] >       "spec": null,
	I0115 11:13:10.195010 1693723 command_runner.go:130] >       "pinned": false
	I0115 11:13:10.195015 1693723 command_runner.go:130] >     },
	I0115 11:13:10.195022 1693723 command_runner.go:130] >     {
	I0115 11:13:10.195029 1693723 command_runner.go:130] >       "id": "04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419",
	I0115 11:13:10.195034 1693723 command_runner.go:130] >       "repoTags": [
	I0115 11:13:10.195043 1693723 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0115 11:13:10.195049 1693723 command_runner.go:130] >       ],
	I0115 11:13:10.195064 1693723 command_runner.go:130] >       "repoDigests": [
	I0115 11:13:10.195073 1693723 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb",
	I0115 11:13:10.195083 1693723 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2"
	I0115 11:13:10.195090 1693723 command_runner.go:130] >       ],
	I0115 11:13:10.195099 1693723 command_runner.go:130] >       "size": "121119694",
	I0115 11:13:10.195106 1693723 command_runner.go:130] >       "uid": {
	I0115 11:13:10.195111 1693723 command_runner.go:130] >         "value": "0"
	I0115 11:13:10.195115 1693723 command_runner.go:130] >       },
	I0115 11:13:10.195120 1693723 command_runner.go:130] >       "username": "",
	I0115 11:13:10.195125 1693723 command_runner.go:130] >       "spec": null,
	I0115 11:13:10.195132 1693723 command_runner.go:130] >       "pinned": false
	I0115 11:13:10.195137 1693723 command_runner.go:130] >     },
	I0115 11:13:10.195144 1693723 command_runner.go:130] >     {
	I0115 11:13:10.195151 1693723 command_runner.go:130] >       "id": "9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b",
	I0115 11:13:10.195156 1693723 command_runner.go:130] >       "repoTags": [
	I0115 11:13:10.195165 1693723 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0115 11:13:10.195169 1693723 command_runner.go:130] >       ],
	I0115 11:13:10.195174 1693723 command_runner.go:130] >       "repoDigests": [
	I0115 11:13:10.195187 1693723 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0115 11:13:10.195202 1693723 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e"
	I0115 11:13:10.195211 1693723 command_runner.go:130] >       ],
	I0115 11:13:10.195217 1693723 command_runner.go:130] >       "size": "117252916",
	I0115 11:13:10.195224 1693723 command_runner.go:130] >       "uid": {
	I0115 11:13:10.195232 1693723 command_runner.go:130] >         "value": "0"
	I0115 11:13:10.195236 1693723 command_runner.go:130] >       },
	I0115 11:13:10.195241 1693723 command_runner.go:130] >       "username": "",
	I0115 11:13:10.195249 1693723 command_runner.go:130] >       "spec": null,
	I0115 11:13:10.195254 1693723 command_runner.go:130] >       "pinned": false
	I0115 11:13:10.195258 1693723 command_runner.go:130] >     },
	I0115 11:13:10.195266 1693723 command_runner.go:130] >     {
	I0115 11:13:10.195273 1693723 command_runner.go:130] >       "id": "3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39",
	I0115 11:13:10.195278 1693723 command_runner.go:130] >       "repoTags": [
	I0115 11:13:10.195284 1693723 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0115 11:13:10.195291 1693723 command_runner.go:130] >       ],
	I0115 11:13:10.195296 1693723 command_runner.go:130] >       "repoDigests": [
	I0115 11:13:10.195305 1693723 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68",
	I0115 11:13:10.195316 1693723 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0115 11:13:10.195321 1693723 command_runner.go:130] >       ],
	I0115 11:13:10.195328 1693723 command_runner.go:130] >       "size": "69992343",
	I0115 11:13:10.195333 1693723 command_runner.go:130] >       "uid": null,
	I0115 11:13:10.195340 1693723 command_runner.go:130] >       "username": "",
	I0115 11:13:10.195348 1693723 command_runner.go:130] >       "spec": null,
	I0115 11:13:10.195353 1693723 command_runner.go:130] >       "pinned": false
	I0115 11:13:10.195356 1693723 command_runner.go:130] >     },
	I0115 11:13:10.195360 1693723 command_runner.go:130] >     {
	I0115 11:13:10.195370 1693723 command_runner.go:130] >       "id": "05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54",
	I0115 11:13:10.195378 1693723 command_runner.go:130] >       "repoTags": [
	I0115 11:13:10.195384 1693723 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0115 11:13:10.195388 1693723 command_runner.go:130] >       ],
	I0115 11:13:10.195393 1693723 command_runner.go:130] >       "repoDigests": [
	I0115 11:13:10.195418 1693723 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0115 11:13:10.195431 1693723 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe"
	I0115 11:13:10.195436 1693723 command_runner.go:130] >       ],
	I0115 11:13:10.195441 1693723 command_runner.go:130] >       "size": "59253556",
	I0115 11:13:10.195446 1693723 command_runner.go:130] >       "uid": {
	I0115 11:13:10.195453 1693723 command_runner.go:130] >         "value": "0"
	I0115 11:13:10.195457 1693723 command_runner.go:130] >       },
	I0115 11:13:10.195464 1693723 command_runner.go:130] >       "username": "",
	I0115 11:13:10.195471 1693723 command_runner.go:130] >       "spec": null,
	I0115 11:13:10.195476 1693723 command_runner.go:130] >       "pinned": false
	I0115 11:13:10.195480 1693723 command_runner.go:130] >     },
	I0115 11:13:10.195486 1693723 command_runner.go:130] >     {
	I0115 11:13:10.195494 1693723 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I0115 11:13:10.195502 1693723 command_runner.go:130] >       "repoTags": [
	I0115 11:13:10.195508 1693723 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0115 11:13:10.195513 1693723 command_runner.go:130] >       ],
	I0115 11:13:10.195518 1693723 command_runner.go:130] >       "repoDigests": [
	I0115 11:13:10.195527 1693723 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I0115 11:13:10.195539 1693723 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I0115 11:13:10.195544 1693723 command_runner.go:130] >       ],
	I0115 11:13:10.195552 1693723 command_runner.go:130] >       "size": "520014",
	I0115 11:13:10.195557 1693723 command_runner.go:130] >       "uid": {
	I0115 11:13:10.195562 1693723 command_runner.go:130] >         "value": "65535"
	I0115 11:13:10.195568 1693723 command_runner.go:130] >       },
	I0115 11:13:10.195573 1693723 command_runner.go:130] >       "username": "",
	I0115 11:13:10.195578 1693723 command_runner.go:130] >       "spec": null,
	I0115 11:13:10.195587 1693723 command_runner.go:130] >       "pinned": false
	I0115 11:13:10.195591 1693723 command_runner.go:130] >     }
	I0115 11:13:10.195595 1693723 command_runner.go:130] >   ]
	I0115 11:13:10.195599 1693723 command_runner.go:130] > }
	I0115 11:13:10.195734 1693723 crio.go:496] all images are preloaded for cri-o runtime.
	I0115 11:13:10.195748 1693723 cache_images.go:84] Images are preloaded, skipping loading
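With both checks satisfied, minikube dumps the effective CRI-O configuration with `crio config`, which prints the merged TOML shown below; broadly, commented entries reflect built-in defaults while uncommented entries are explicitly set. A minimal sketch, assuming crio is on PATH and sudo access, for pulling one active key out of that stream:

// Hypothetical sketch: scan `crio config` output for an active (non-comment)
// cgroup_manager assignment. Not part of minikube; for illustration only.
package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crio", "config").Output()
	if err != nil {
		panic(err)
	}
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		// Commented defaults start with '#'; only report the active setting.
		if strings.HasPrefix(line, "cgroup_manager") {
			fmt.Println(line) // e.g. cgroup_manager = "cgroupfs"
			return
		}
	}
	fmt.Println("cgroup_manager not set explicitly; CRI-O's default applies")
}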
	I0115 11:13:10.195822 1693723 ssh_runner.go:195] Run: crio config
	I0115 11:13:10.257366 1693723 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0115 11:13:10.257391 1693723 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0115 11:13:10.257399 1693723 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0115 11:13:10.257404 1693723 command_runner.go:130] > #
	I0115 11:13:10.257413 1693723 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0115 11:13:10.257421 1693723 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0115 11:13:10.257428 1693723 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0115 11:13:10.257440 1693723 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0115 11:13:10.257445 1693723 command_runner.go:130] > # reload'.
	I0115 11:13:10.257452 1693723 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0115 11:13:10.257460 1693723 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0115 11:13:10.257467 1693723 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0115 11:13:10.257474 1693723 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0115 11:13:10.257478 1693723 command_runner.go:130] > [crio]
	I0115 11:13:10.257485 1693723 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0115 11:13:10.257491 1693723 command_runner.go:130] > # container images, in this directory.
	I0115 11:13:10.257508 1693723 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0115 11:13:10.257527 1693723 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0115 11:13:10.257534 1693723 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0115 11:13:10.257542 1693723 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0115 11:13:10.257549 1693723 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0115 11:13:10.257554 1693723 command_runner.go:130] > # storage_driver = "vfs"
	I0115 11:13:10.257561 1693723 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0115 11:13:10.257569 1693723 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0115 11:13:10.257573 1693723 command_runner.go:130] > # storage_option = [
	I0115 11:13:10.257577 1693723 command_runner.go:130] > # ]
	I0115 11:13:10.257585 1693723 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0115 11:13:10.257593 1693723 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0115 11:13:10.257599 1693723 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0115 11:13:10.257608 1693723 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0115 11:13:10.257615 1693723 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0115 11:13:10.257621 1693723 command_runner.go:130] > # always happen on a node reboot
	I0115 11:13:10.257626 1693723 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0115 11:13:10.257633 1693723 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0115 11:13:10.257640 1693723 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0115 11:13:10.257654 1693723 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0115 11:13:10.257660 1693723 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0115 11:13:10.257670 1693723 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0115 11:13:10.257680 1693723 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0115 11:13:10.257685 1693723 command_runner.go:130] > # internal_wipe = true
	I0115 11:13:10.257691 1693723 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0115 11:13:10.257698 1693723 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0115 11:13:10.257704 1693723 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0115 11:13:10.257711 1693723 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0115 11:13:10.257719 1693723 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0115 11:13:10.257723 1693723 command_runner.go:130] > [crio.api]
	I0115 11:13:10.257729 1693723 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0115 11:13:10.258073 1693723 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0115 11:13:10.258092 1693723 command_runner.go:130] > # IP address on which the stream server will listen.
	I0115 11:13:10.258099 1693723 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0115 11:13:10.258128 1693723 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0115 11:13:10.258141 1693723 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0115 11:13:10.258147 1693723 command_runner.go:130] > # stream_port = "0"
	I0115 11:13:10.258172 1693723 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0115 11:13:10.258182 1693723 command_runner.go:130] > # stream_enable_tls = false
	I0115 11:13:10.258201 1693723 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0115 11:13:10.258214 1693723 command_runner.go:130] > # stream_idle_timeout = ""
	I0115 11:13:10.258222 1693723 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0115 11:13:10.258234 1693723 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0115 11:13:10.258239 1693723 command_runner.go:130] > # minutes.
	I0115 11:13:10.258438 1693723 command_runner.go:130] > # stream_tls_cert = ""
	I0115 11:13:10.258456 1693723 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0115 11:13:10.258465 1693723 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0115 11:13:10.258470 1693723 command_runner.go:130] > # stream_tls_key = ""
	I0115 11:13:10.258477 1693723 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0115 11:13:10.258485 1693723 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0115 11:13:10.258494 1693723 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0115 11:13:10.258500 1693723 command_runner.go:130] > # stream_tls_ca = ""
	I0115 11:13:10.258513 1693723 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0115 11:13:10.258519 1693723 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0115 11:13:10.258531 1693723 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0115 11:13:10.258547 1693723 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0115 11:13:10.258568 1693723 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0115 11:13:10.258578 1693723 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0115 11:13:10.258583 1693723 command_runner.go:130] > [crio.runtime]
	I0115 11:13:10.258594 1693723 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0115 11:13:10.258601 1693723 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0115 11:13:10.258609 1693723 command_runner.go:130] > # "nofile=1024:2048"
	I0115 11:13:10.258617 1693723 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0115 11:13:10.258627 1693723 command_runner.go:130] > # default_ulimits = [
	I0115 11:13:10.258631 1693723 command_runner.go:130] > # ]
	I0115 11:13:10.258638 1693723 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0115 11:13:10.258650 1693723 command_runner.go:130] > # no_pivot = false
	I0115 11:13:10.258657 1693723 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0115 11:13:10.258664 1693723 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0115 11:13:10.258674 1693723 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0115 11:13:10.258690 1693723 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0115 11:13:10.258701 1693723 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0115 11:13:10.258711 1693723 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0115 11:13:10.258721 1693723 command_runner.go:130] > # conmon = ""
	I0115 11:13:10.258727 1693723 command_runner.go:130] > # Cgroup setting for conmon
	I0115 11:13:10.258735 1693723 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0115 11:13:10.258742 1693723 command_runner.go:130] > conmon_cgroup = "pod"
	I0115 11:13:10.258751 1693723 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0115 11:13:10.258761 1693723 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0115 11:13:10.258769 1693723 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0115 11:13:10.258778 1693723 command_runner.go:130] > # conmon_env = [
	I0115 11:13:10.258782 1693723 command_runner.go:130] > # ]
	I0115 11:13:10.258789 1693723 command_runner.go:130] > # Additional environment variables to set for all the
	I0115 11:13:10.258798 1693723 command_runner.go:130] > # containers. These are overridden if set in the
	I0115 11:13:10.258806 1693723 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0115 11:13:10.258810 1693723 command_runner.go:130] > # default_env = [
	I0115 11:13:10.258814 1693723 command_runner.go:130] > # ]
	I0115 11:13:10.258825 1693723 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0115 11:13:10.259049 1693723 command_runner.go:130] > # selinux = false
	I0115 11:13:10.259067 1693723 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0115 11:13:10.259076 1693723 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0115 11:13:10.259083 1693723 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0115 11:13:10.259089 1693723 command_runner.go:130] > # seccomp_profile = ""
	I0115 11:13:10.259099 1693723 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0115 11:13:10.259107 1693723 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0115 11:13:10.259122 1693723 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0115 11:13:10.259128 1693723 command_runner.go:130] > # which might increase security.
	I0115 11:13:10.259143 1693723 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0115 11:13:10.259151 1693723 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0115 11:13:10.259162 1693723 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0115 11:13:10.259170 1693723 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0115 11:13:10.259178 1693723 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0115 11:13:10.259186 1693723 command_runner.go:130] > # This option supports live configuration reload.
	I0115 11:13:10.259192 1693723 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0115 11:13:10.259203 1693723 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0115 11:13:10.259208 1693723 command_runner.go:130] > # the cgroup blockio controller.
	I0115 11:13:10.259220 1693723 command_runner.go:130] > # blockio_config_file = ""
	I0115 11:13:10.259232 1693723 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0115 11:13:10.259240 1693723 command_runner.go:130] > # irqbalance daemon.
	I0115 11:13:10.259247 1693723 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0115 11:13:10.259258 1693723 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0115 11:13:10.259270 1693723 command_runner.go:130] > # This option supports live configuration reload.
	I0115 11:13:10.259275 1693723 command_runner.go:130] > # rdt_config_file = ""
	I0115 11:13:10.259282 1693723 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0115 11:13:10.259298 1693723 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0115 11:13:10.259307 1693723 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0115 11:13:10.259495 1693723 command_runner.go:130] > # separate_pull_cgroup = ""
	I0115 11:13:10.259539 1693723 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0115 11:13:10.259554 1693723 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0115 11:13:10.259571 1693723 command_runner.go:130] > # will be added.
	I0115 11:13:10.259581 1693723 command_runner.go:130] > # default_capabilities = [
	I0115 11:13:10.260496 1693723 command_runner.go:130] > # 	"CHOWN",
	I0115 11:13:10.260517 1693723 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0115 11:13:10.260522 1693723 command_runner.go:130] > # 	"FSETID",
	I0115 11:13:10.260527 1693723 command_runner.go:130] > # 	"FOWNER",
	I0115 11:13:10.260532 1693723 command_runner.go:130] > # 	"SETGID",
	I0115 11:13:10.260569 1693723 command_runner.go:130] > # 	"SETUID",
	I0115 11:13:10.260575 1693723 command_runner.go:130] > # 	"SETPCAP",
	I0115 11:13:10.260698 1693723 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0115 11:13:10.260710 1693723 command_runner.go:130] > # 	"KILL",
	I0115 11:13:10.260715 1693723 command_runner.go:130] > # ]
	I0115 11:13:10.260740 1693723 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0115 11:13:10.260756 1693723 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0115 11:13:10.260769 1693723 command_runner.go:130] > # add_inheritable_capabilities = true
	I0115 11:13:10.260779 1693723 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0115 11:13:10.260793 1693723 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0115 11:13:10.260798 1693723 command_runner.go:130] > # default_sysctls = [
	I0115 11:13:10.260964 1693723 command_runner.go:130] > # ]
	I0115 11:13:10.260984 1693723 command_runner.go:130] > # List of devices on the host that a
	I0115 11:13:10.261005 1693723 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0115 11:13:10.261017 1693723 command_runner.go:130] > # allowed_devices = [
	I0115 11:13:10.261022 1693723 command_runner.go:130] > # 	"/dev/fuse",
	I0115 11:13:10.261026 1693723 command_runner.go:130] > # ]
	I0115 11:13:10.261033 1693723 command_runner.go:130] > # List of additional devices, specified as
	I0115 11:13:10.261064 1693723 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0115 11:13:10.261094 1693723 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0115 11:13:10.261102 1693723 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0115 11:13:10.261120 1693723 command_runner.go:130] > # additional_devices = [
	I0115 11:13:10.261296 1693723 command_runner.go:130] > # ]
	I0115 11:13:10.261311 1693723 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0115 11:13:10.261317 1693723 command_runner.go:130] > # cdi_spec_dirs = [
	I0115 11:13:10.261333 1693723 command_runner.go:130] > # 	"/etc/cdi",
	I0115 11:13:10.261346 1693723 command_runner.go:130] > # 	"/var/run/cdi",
	I0115 11:13:10.261351 1693723 command_runner.go:130] > # ]
	I0115 11:13:10.261359 1693723 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0115 11:13:10.261371 1693723 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0115 11:13:10.261380 1693723 command_runner.go:130] > # Defaults to false.
	I0115 11:13:10.261391 1693723 command_runner.go:130] > # device_ownership_from_security_context = false
	I0115 11:13:10.261427 1693723 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0115 11:13:10.261444 1693723 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0115 11:13:10.261449 1693723 command_runner.go:130] > # hooks_dir = [
	I0115 11:13:10.261473 1693723 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0115 11:13:10.261478 1693723 command_runner.go:130] > # ]
	I0115 11:13:10.261485 1693723 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0115 11:13:10.261500 1693723 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0115 11:13:10.261507 1693723 command_runner.go:130] > # its default mounts from the following two files:
	I0115 11:13:10.261511 1693723 command_runner.go:130] > #
	I0115 11:13:10.261536 1693723 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0115 11:13:10.261551 1693723 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0115 11:13:10.261571 1693723 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0115 11:13:10.261581 1693723 command_runner.go:130] > #
	I0115 11:13:10.261589 1693723 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0115 11:13:10.261601 1693723 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0115 11:13:10.261614 1693723 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0115 11:13:10.261626 1693723 command_runner.go:130] > #      only add mounts it finds in this file.
	I0115 11:13:10.261643 1693723 command_runner.go:130] > #
	I0115 11:13:10.261672 1693723 command_runner.go:130] > # default_mounts_file = ""
	I0115 11:13:10.261688 1693723 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0115 11:13:10.261696 1693723 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0115 11:13:10.261705 1693723 command_runner.go:130] > # pids_limit = 0
	I0115 11:13:10.261713 1693723 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0115 11:13:10.261751 1693723 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0115 11:13:10.261767 1693723 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0115 11:13:10.261779 1693723 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0115 11:13:10.261936 1693723 command_runner.go:130] > # log_size_max = -1
	I0115 11:13:10.261955 1693723 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0115 11:13:10.261992 1693723 command_runner.go:130] > # log_to_journald = false
	I0115 11:13:10.262017 1693723 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0115 11:13:10.262028 1693723 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0115 11:13:10.262034 1693723 command_runner.go:130] > # Path to directory for container attach sockets.
	I0115 11:13:10.262041 1693723 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0115 11:13:10.262063 1693723 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0115 11:13:10.262082 1693723 command_runner.go:130] > # bind_mount_prefix = ""
	I0115 11:13:10.262097 1693723 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0115 11:13:10.262102 1693723 command_runner.go:130] > # read_only = false
	I0115 11:13:10.262120 1693723 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0115 11:13:10.262128 1693723 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0115 11:13:10.262137 1693723 command_runner.go:130] > # live configuration reload.
	I0115 11:13:10.262142 1693723 command_runner.go:130] > # log_level = "info"
	I0115 11:13:10.262149 1693723 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0115 11:13:10.262171 1693723 command_runner.go:130] > # This option supports live configuration reload.
	I0115 11:13:10.262177 1693723 command_runner.go:130] > # log_filter = ""
	I0115 11:13:10.262195 1693723 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0115 11:13:10.262208 1693723 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0115 11:13:10.262213 1693723 command_runner.go:130] > # separated by comma.
	I0115 11:13:10.262219 1693723 command_runner.go:130] > # uid_mappings = ""
	I0115 11:13:10.262227 1693723 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0115 11:13:10.262234 1693723 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0115 11:13:10.262243 1693723 command_runner.go:130] > # separated by comma.
	I0115 11:13:10.262247 1693723 command_runner.go:130] > # gid_mappings = ""
	I0115 11:13:10.262268 1693723 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0115 11:13:10.262318 1693723 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0115 11:13:10.262337 1693723 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0115 11:13:10.262348 1693723 command_runner.go:130] > # minimum_mappable_uid = -1
	I0115 11:13:10.262359 1693723 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0115 11:13:10.262371 1693723 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0115 11:13:10.262378 1693723 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0115 11:13:10.262533 1693723 command_runner.go:130] > # minimum_mappable_gid = -1
	I0115 11:13:10.262549 1693723 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0115 11:13:10.262580 1693723 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0115 11:13:10.262594 1693723 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0115 11:13:10.262600 1693723 command_runner.go:130] > # ctr_stop_timeout = 30
	I0115 11:13:10.262607 1693723 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0115 11:13:10.262614 1693723 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0115 11:13:10.262628 1693723 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0115 11:13:10.262645 1693723 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0115 11:13:10.262855 1693723 command_runner.go:130] > # drop_infra_ctr = true
	I0115 11:13:10.262871 1693723 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0115 11:13:10.262881 1693723 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0115 11:13:10.262913 1693723 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0115 11:13:10.262926 1693723 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0115 11:13:10.262934 1693723 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0115 11:13:10.262944 1693723 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0115 11:13:10.262950 1693723 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0115 11:13:10.262958 1693723 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0115 11:13:10.262963 1693723 command_runner.go:130] > # pinns_path = ""
	I0115 11:13:10.262995 1693723 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0115 11:13:10.263011 1693723 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0115 11:13:10.263032 1693723 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0115 11:13:10.263041 1693723 command_runner.go:130] > # default_runtime = "runc"
	I0115 11:13:10.263048 1693723 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0115 11:13:10.263062 1693723 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0115 11:13:10.263074 1693723 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0115 11:13:10.263089 1693723 command_runner.go:130] > # creation as a file is not desired either.
	I0115 11:13:10.263125 1693723 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0115 11:13:10.263140 1693723 command_runner.go:130] > # the hostname is being managed dynamically.
	I0115 11:13:10.263163 1693723 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0115 11:13:10.263174 1693723 command_runner.go:130] > # ]
	I0115 11:13:10.263183 1693723 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0115 11:13:10.263199 1693723 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0115 11:13:10.263208 1693723 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0115 11:13:10.263243 1693723 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0115 11:13:10.263255 1693723 command_runner.go:130] > #
	I0115 11:13:10.263262 1693723 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0115 11:13:10.263279 1693723 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0115 11:13:10.263292 1693723 command_runner.go:130] > #  runtime_type = "oci"
	I0115 11:13:10.263298 1693723 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0115 11:13:10.263304 1693723 command_runner.go:130] > #  privileged_without_host_devices = false
	I0115 11:13:10.263313 1693723 command_runner.go:130] > #  allowed_annotations = []
	I0115 11:13:10.263318 1693723 command_runner.go:130] > # Where:
	I0115 11:13:10.263325 1693723 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0115 11:13:10.263356 1693723 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0115 11:13:10.263373 1693723 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0115 11:13:10.263394 1693723 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0115 11:13:10.263411 1693723 command_runner.go:130] > #   in $PATH.
	I0115 11:13:10.263426 1693723 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0115 11:13:10.263433 1693723 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0115 11:13:10.263440 1693723 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0115 11:13:10.263450 1693723 command_runner.go:130] > #   state.
	I0115 11:13:10.263469 1693723 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0115 11:13:10.263485 1693723 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0115 11:13:10.263502 1693723 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0115 11:13:10.263516 1693723 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0115 11:13:10.263524 1693723 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0115 11:13:10.263532 1693723 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0115 11:13:10.263543 1693723 command_runner.go:130] > #   The currently recognized values are:
	I0115 11:13:10.263551 1693723 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0115 11:13:10.263563 1693723 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0115 11:13:10.263582 1693723 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0115 11:13:10.263598 1693723 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0115 11:13:10.263608 1693723 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0115 11:13:10.263627 1693723 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0115 11:13:10.263647 1693723 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0115 11:13:10.263661 1693723 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0115 11:13:10.263667 1693723 command_runner.go:130] > #   should be moved to the container's cgroup
	I0115 11:13:10.263676 1693723 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0115 11:13:10.263683 1693723 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0115 11:13:10.263699 1693723 command_runner.go:130] > runtime_type = "oci"
	I0115 11:13:10.263712 1693723 command_runner.go:130] > runtime_root = "/run/runc"
	I0115 11:13:10.263728 1693723 command_runner.go:130] > runtime_config_path = ""
	I0115 11:13:10.263739 1693723 command_runner.go:130] > monitor_path = ""
	I0115 11:13:10.263746 1693723 command_runner.go:130] > monitor_cgroup = ""
	I0115 11:13:10.263751 1693723 command_runner.go:130] > monitor_exec_cgroup = ""
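As a hedged aside: the handler template spelled out in the comments above maps onto nested TOML tables (crio -> runtime -> runtimes -> handler name), so the active runc entry just printed can be decoded programmatically. The sketch below assumes the github.com/BurntSushi/toml library and hypothetical struct names, and mirrors only the path/type/root keys; it is not CRI-O's own config type:

// Hypothetical sketch: decode the [crio.runtime.runtimes.*] tables from a
// `crio config` dump. Assumes github.com/BurntSushi/toml is available.
package main

import (
	"fmt"
	"os/exec"

	"github.com/BurntSushi/toml"
)

type runtimeHandler struct {
	RuntimePath string `toml:"runtime_path"`
	RuntimeType string `toml:"runtime_type"`
	RuntimeRoot string `toml:"runtime_root"`
}

type crioConfig struct {
	Crio struct {
		Runtime struct {
			Runtimes map[string]runtimeHandler `toml:"runtimes"`
		} `toml:"runtime"`
	} `toml:"crio"`
}

func main() {
	out, err := exec.Command("sudo", "crio", "config").Output()
	if err != nil {
		panic(err)
	}
	var cfg crioConfig
	if err := toml.Unmarshal(out, &cfg); err != nil {
		panic(err)
	}
	// For the dump above this prints: runc -> /usr/lib/cri-o-runc/sbin/runc (oci)
	for name, h := range cfg.Crio.Runtime.Runtimes {
		fmt.Printf("%s -> %s (%s)\n", name, h.RuntimePath, h.RuntimeType)
	}
}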
	I0115 11:13:10.263821 1693723 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0115 11:13:10.263835 1693723 command_runner.go:130] > # running containers
	I0115 11:13:10.263841 1693723 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0115 11:13:10.263849 1693723 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0115 11:13:10.263871 1693723 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0115 11:13:10.263885 1693723 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0115 11:13:10.263894 1693723 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0115 11:13:10.263906 1693723 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0115 11:13:10.263912 1693723 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0115 11:13:10.263922 1693723 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0115 11:13:10.263927 1693723 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0115 11:13:10.263943 1693723 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0115 11:13:10.263958 1693723 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0115 11:13:10.263974 1693723 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0115 11:13:10.263987 1693723 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0115 11:13:10.263997 1693723 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0115 11:13:10.264011 1693723 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0115 11:13:10.264018 1693723 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0115 11:13:10.264034 1693723 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0115 11:13:10.264054 1693723 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0115 11:13:10.264068 1693723 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0115 11:13:10.264086 1693723 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0115 11:13:10.264099 1693723 command_runner.go:130] > # Example:
	I0115 11:13:10.264105 1693723 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0115 11:13:10.264111 1693723 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0115 11:13:10.264125 1693723 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0115 11:13:10.264132 1693723 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0115 11:13:10.264140 1693723 command_runner.go:130] > # cpuset = 0
	I0115 11:13:10.264145 1693723 command_runner.go:130] > # cpushares = "0-1"
	I0115 11:13:10.264159 1693723 command_runner.go:130] > # Where:
	I0115 11:13:10.264170 1693723 command_runner.go:130] > # The workload name is workload-type.
	I0115 11:13:10.264193 1693723 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0115 11:13:10.264207 1693723 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0115 11:13:10.264215 1693723 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0115 11:13:10.264228 1693723 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0115 11:13:10.264236 1693723 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0115 11:13:10.264243 1693723 command_runner.go:130] > # 
	I0115 11:13:10.264251 1693723 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0115 11:13:10.264265 1693723 command_runner.go:130] > #
	I0115 11:13:10.264283 1693723 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0115 11:13:10.264303 1693723 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0115 11:13:10.264320 1693723 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0115 11:13:10.264329 1693723 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0115 11:13:10.264338 1693723 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0115 11:13:10.264348 1693723 command_runner.go:130] > [crio.image]
	I0115 11:13:10.264355 1693723 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0115 11:13:10.264375 1693723 command_runner.go:130] > # default_transport = "docker://"
	I0115 11:13:10.264390 1693723 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0115 11:13:10.264399 1693723 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0115 11:13:10.264414 1693723 command_runner.go:130] > # global_auth_file = ""
	I0115 11:13:10.264420 1693723 command_runner.go:130] > # The image used to instantiate infra containers.
	I0115 11:13:10.264431 1693723 command_runner.go:130] > # This option supports live configuration reload.
	I0115 11:13:10.264438 1693723 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0115 11:13:10.264473 1693723 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0115 11:13:10.264490 1693723 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0115 11:13:10.264497 1693723 command_runner.go:130] > # This option supports live configuration reload.
	I0115 11:13:10.264503 1693723 command_runner.go:130] > # pause_image_auth_file = ""
	I0115 11:13:10.264515 1693723 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0115 11:13:10.264523 1693723 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0115 11:13:10.264544 1693723 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0115 11:13:10.264559 1693723 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0115 11:13:10.264566 1693723 command_runner.go:130] > # pause_command = "/pause"
	I0115 11:13:10.264574 1693723 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0115 11:13:10.264582 1693723 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0115 11:13:10.264594 1693723 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0115 11:13:10.264601 1693723 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0115 11:13:10.264621 1693723 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0115 11:13:10.264855 1693723 command_runner.go:130] > # signature_policy = ""
	I0115 11:13:10.264872 1693723 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0115 11:13:10.264880 1693723 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0115 11:13:10.264885 1693723 command_runner.go:130] > # changing them here.
	I0115 11:13:10.264890 1693723 command_runner.go:130] > # insecure_registries = [
	I0115 11:13:10.264908 1693723 command_runner.go:130] > # ]
	I0115 11:13:10.264922 1693723 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0115 11:13:10.264929 1693723 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0115 11:13:10.264936 1693723 command_runner.go:130] > # image_volumes = "mkdir"
	I0115 11:13:10.264943 1693723 command_runner.go:130] > # Temporary directory to use for storing big files
	I0115 11:13:10.264951 1693723 command_runner.go:130] > # big_files_temporary_dir = ""
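	If a cluster does need to pull from a plain-HTTP registry, the insecure_registries knob above can be set via a drop-in rather than by editing the main file. A hedged sketch, assuming CRI-O's /etc/crio/crio.conf.d drop-in directory is honored on this image; the registry address is hypothetical, and per the comments above, /etc/containers/registries.conf is the preferred place for this.
	
	sudo tee /etc/crio/crio.conf.d/10-insecure-registry.conf <<'EOF'
	[crio.image]
	insecure_registries = [
	  "192.168.58.100:5000",
	]
	EOF
	sudo systemctl restart crio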
	I0115 11:13:10.264959 1693723 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0115 11:13:10.264963 1693723 command_runner.go:130] > # CNI plugins.
	I0115 11:13:10.264969 1693723 command_runner.go:130] > [crio.network]
	I0115 11:13:10.265002 1693723 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0115 11:13:10.265020 1693723 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0115 11:13:10.265033 1693723 command_runner.go:130] > # cni_default_network = ""
	I0115 11:13:10.265043 1693723 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0115 11:13:10.265053 1693723 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0115 11:13:10.265060 1693723 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0115 11:13:10.265089 1693723 command_runner.go:130] > # plugin_dirs = [
	I0115 11:13:10.265102 1693723 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0115 11:13:10.265108 1693723 command_runner.go:130] > # ]
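	Since cni_default_network is left unset here, CRI-O will take the first configuration it finds in network_dir. A quick way to see what that will be on the node:
	
	ls /etc/cni/net.d/     # first file found here becomes the default network
	ls /opt/cni/bin/       # plugin binaries CRI-O can invoke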
	I0115 11:13:10.265115 1693723 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0115 11:13:10.265119 1693723 command_runner.go:130] > [crio.metrics]
	I0115 11:13:10.265126 1693723 command_runner.go:130] > # Globally enable or disable metrics support.
	I0115 11:13:10.265135 1693723 command_runner.go:130] > # enable_metrics = false
	I0115 11:13:10.265141 1693723 command_runner.go:130] > # Specify enabled metrics collectors.
	I0115 11:13:10.265146 1693723 command_runner.go:130] > # Per default all metrics are enabled.
	I0115 11:13:10.265167 1693723 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0115 11:13:10.265207 1693723 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0115 11:13:10.265224 1693723 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0115 11:13:10.265229 1693723 command_runner.go:130] > # metrics_collectors = [
	I0115 11:13:10.265238 1693723 command_runner.go:130] > # 	"operations",
	I0115 11:13:10.265244 1693723 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0115 11:13:10.265249 1693723 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0115 11:13:10.265257 1693723 command_runner.go:130] > # 	"operations_errors",
	I0115 11:13:10.265262 1693723 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0115 11:13:10.265267 1693723 command_runner.go:130] > # 	"image_pulls_by_name",
	I0115 11:13:10.265273 1693723 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0115 11:13:10.265293 1693723 command_runner.go:130] > # 	"image_pulls_failures",
	I0115 11:13:10.265298 1693723 command_runner.go:130] > # 	"image_pulls_successes",
	I0115 11:13:10.265316 1693723 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0115 11:13:10.265327 1693723 command_runner.go:130] > # 	"image_layer_reuse",
	I0115 11:13:10.265333 1693723 command_runner.go:130] > # 	"containers_oom_total",
	I0115 11:13:10.265340 1693723 command_runner.go:130] > # 	"containers_oom",
	I0115 11:13:10.265346 1693723 command_runner.go:130] > # 	"processes_defunct",
	I0115 11:13:10.265351 1693723 command_runner.go:130] > # 	"operations_total",
	I0115 11:13:10.265367 1693723 command_runner.go:130] > # 	"operations_latency_seconds",
	I0115 11:13:10.265377 1693723 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0115 11:13:10.265401 1693723 command_runner.go:130] > # 	"operations_errors_total",
	I0115 11:13:10.265415 1693723 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0115 11:13:10.265421 1693723 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0115 11:13:10.265438 1693723 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0115 11:13:10.265456 1693723 command_runner.go:130] > # 	"image_pulls_success_total",
	I0115 11:13:10.265462 1693723 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0115 11:13:10.265473 1693723 command_runner.go:130] > # 	"containers_oom_count_total",
	I0115 11:13:10.265477 1693723 command_runner.go:130] > # ]
	I0115 11:13:10.265483 1693723 command_runner.go:130] > # The port on which the metrics server will listen.
	I0115 11:13:10.265491 1693723 command_runner.go:130] > # metrics_port = 9090
	I0115 11:13:10.265497 1693723 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0115 11:13:10.265502 1693723 command_runner.go:130] > # metrics_socket = ""
	I0115 11:13:10.265518 1693723 command_runner.go:130] > # The certificate for the secure metrics server.
	I0115 11:13:10.265534 1693723 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0115 11:13:10.265551 1693723 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0115 11:13:10.265568 1693723 command_runner.go:130] > # certificate on any modification event.
	I0115 11:13:10.265578 1693723 command_runner.go:130] > # metrics_cert = ""
	I0115 11:13:10.265586 1693723 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0115 11:13:10.265596 1693723 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0115 11:13:10.265601 1693723 command_runner.go:130] > # metrics_key = ""
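	Metrics are disabled by default above. A hedged sketch for enabling them via a drop-in and probing the endpoint; the port matches the commented default, but the exact metric names exposed are an assumption based on the collector list, so the grep pattern may need adjusting.
	
	sudo tee /etc/crio/crio.conf.d/20-metrics.conf <<'EOF'
	[crio.metrics]
	enable_metrics = true
	metrics_port = 9090
	EOF
	sudo systemctl restart crio
	curl -s http://127.0.0.1:9090/metrics | grep crio_operations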
	I0115 11:13:10.265608 1693723 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0115 11:13:10.265615 1693723 command_runner.go:130] > [crio.tracing]
	I0115 11:13:10.265632 1693723 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0115 11:13:10.265645 1693723 command_runner.go:130] > # enable_tracing = false
	I0115 11:13:10.265653 1693723 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0115 11:13:10.265658 1693723 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0115 11:13:10.265675 1693723 command_runner.go:130] > # Number of samples to collect per million spans.
	I0115 11:13:10.265689 1693723 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0115 11:13:10.265698 1693723 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0115 11:13:10.265707 1693723 command_runner.go:130] > [crio.stats]
	I0115 11:13:10.265714 1693723 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0115 11:13:10.265724 1693723 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0115 11:13:10.265911 1693723 command_runner.go:130] > # stats_collection_period = 0
	I0115 11:13:10.267755 1693723 command_runner.go:130] ! time="2024-01-15 11:13:10.249303198Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0115 11:13:10.267786 1693723 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0115 11:13:10.268007 1693723 cni.go:84] Creating CNI manager for ""
	I0115 11:13:10.268020 1693723 cni.go:136] 1 nodes found, recommending kindnet
	I0115 11:13:10.268061 1693723 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0115 11:13:10.268090 1693723 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-279658 NodeName:multinode-279658 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0115 11:13:10.268309 1693723 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-279658"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
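	The generated config above can be sanity-checked before kubeadm consumes it. A hedged sketch using init's dry-run mode, which renders what would be done without changing the node; dry-run still runs preflight checks, so the same --ignore-preflight-errors list minikube passes below may be needed. The path matches the one minikube writes later in this log.
	
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run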
	I0115 11:13:10.268405 1693723 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-279658 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-279658 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0115 11:13:10.268606 1693723 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0115 11:13:10.280177 1693723 command_runner.go:130] > kubeadm
	I0115 11:13:10.280199 1693723 command_runner.go:130] > kubectl
	I0115 11:13:10.280204 1693723 command_runner.go:130] > kubelet
	I0115 11:13:10.280245 1693723 binaries.go:44] Found k8s binaries, skipping transfer
	I0115 11:13:10.280338 1693723 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0115 11:13:10.291502 1693723 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I0115 11:13:10.312891 1693723 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
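	Once the unit file and its drop-in land, systemd can confirm exactly what the kubelet will run with:
	
	systemctl cat kubelet          # unit plus the 10-kubeadm.conf drop-in written above
	systemctl is-enabled kubelet   # kubeadm warns later in this log if this is disabled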
	I0115 11:13:10.334502 1693723 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I0115 11:13:10.355970 1693723 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0115 11:13:10.360589 1693723 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 11:13:10.373991 1693723 certs.go:56] Setting up /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/multinode-279658 for IP: 192.168.58.2
	I0115 11:13:10.374028 1693723 certs.go:190] acquiring lock for shared ca certs: {Name:mk2a63925baba8534769a012921a3873667cd449 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 11:13:10.374209 1693723 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17953-1625104/.minikube/ca.key
	I0115 11:13:10.374254 1693723 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17953-1625104/.minikube/proxy-client-ca.key
	I0115 11:13:10.374368 1693723 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/multinode-279658/client.key
	I0115 11:13:10.374383 1693723 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/multinode-279658/client.crt with IP's: []
	I0115 11:13:11.073922 1693723 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/multinode-279658/client.crt ...
	I0115 11:13:11.073956 1693723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/multinode-279658/client.crt: {Name:mke4c37a87077c2ccdb5490720b47221165cba1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 11:13:11.074162 1693723 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/multinode-279658/client.key ...
	I0115 11:13:11.074175 1693723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/multinode-279658/client.key: {Name:mk8890512f25160cfc269d98fa5e25afbefb2b91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 11:13:11.074297 1693723 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/multinode-279658/apiserver.key.cee25041
	I0115 11:13:11.074315 1693723 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/multinode-279658/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0115 11:13:12.288716 1693723 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/multinode-279658/apiserver.crt.cee25041 ...
	I0115 11:13:12.288758 1693723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/multinode-279658/apiserver.crt.cee25041: {Name:mkefd9f25b10830263c8bcbf954885e2398a5f9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 11:13:12.288948 1693723 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/multinode-279658/apiserver.key.cee25041 ...
	I0115 11:13:12.288959 1693723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/multinode-279658/apiserver.key.cee25041: {Name:mk1d1730a7c2919ecfee35b73b6e31ca3b1f91e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 11:13:12.289064 1693723 certs.go:337] copying /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/multinode-279658/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/multinode-279658/apiserver.crt
	I0115 11:13:12.289143 1693723 certs.go:341] copying /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/multinode-279658/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/multinode-279658/apiserver.key
	I0115 11:13:12.289206 1693723 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/multinode-279658/proxy-client.key
	I0115 11:13:12.289223 1693723 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/multinode-279658/proxy-client.crt with IP's: []
	I0115 11:13:13.007438 1693723 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/multinode-279658/proxy-client.crt ...
	I0115 11:13:13.007472 1693723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/multinode-279658/proxy-client.crt: {Name:mk08eb1841eef3c698d3210aa41d9ca504a44ad1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 11:13:13.007668 1693723 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/multinode-279658/proxy-client.key ...
	I0115 11:13:13.007684 1693723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/multinode-279658/proxy-client.key: {Name:mkce42ca6627b25362b838011c2e87b31263acb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 11:13:13.007770 1693723 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/multinode-279658/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0115 11:13:13.007794 1693723 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/multinode-279658/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0115 11:13:13.007807 1693723 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/multinode-279658/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0115 11:13:13.007821 1693723 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/multinode-279658/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0115 11:13:13.007839 1693723 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-1625104/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0115 11:13:13.007856 1693723 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-1625104/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0115 11:13:13.007872 1693723 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-1625104/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0115 11:13:13.007886 1693723 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-1625104/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0115 11:13:13.007944 1693723 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/home/jenkins/minikube-integration/17953-1625104/.minikube/certs/1630435.pem (1338 bytes)
	W0115 11:13:13.007990 1693723 certs.go:433] ignoring /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/home/jenkins/minikube-integration/17953-1625104/.minikube/certs/1630435_empty.pem, impossibly tiny 0 bytes
	I0115 11:13:13.008005 1693723 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/home/jenkins/minikube-integration/17953-1625104/.minikube/certs/ca-key.pem (1679 bytes)
	I0115 11:13:13.008032 1693723 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/home/jenkins/minikube-integration/17953-1625104/.minikube/certs/ca.pem (1082 bytes)
	I0115 11:13:13.008065 1693723 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/home/jenkins/minikube-integration/17953-1625104/.minikube/certs/cert.pem (1123 bytes)
	I0115 11:13:13.008087 1693723 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/home/jenkins/minikube-integration/17953-1625104/.minikube/certs/key.pem (1675 bytes)
	I0115 11:13:13.008141 1693723 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-1625104/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17953-1625104/.minikube/files/etc/ssl/certs/16304352.pem (1708 bytes)
	I0115 11:13:13.008167 1693723 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/1630435.pem -> /usr/share/ca-certificates/1630435.pem
	I0115 11:13:13.008181 1693723 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-1625104/.minikube/files/etc/ssl/certs/16304352.pem -> /usr/share/ca-certificates/16304352.pem
	I0115 11:13:13.008197 1693723 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-1625104/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0115 11:13:13.008782 1693723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/multinode-279658/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0115 11:13:13.038376 1693723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/multinode-279658/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0115 11:13:13.067510 1693723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/multinode-279658/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0115 11:13:13.097880 1693723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/multinode-279658/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0115 11:13:13.127377 1693723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-1625104/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0115 11:13:13.156283 1693723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-1625104/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0115 11:13:13.184168 1693723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-1625104/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0115 11:13:13.212155 1693723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-1625104/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0115 11:13:13.240417 1693723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/1630435.pem --> /usr/share/ca-certificates/1630435.pem (1338 bytes)
	I0115 11:13:13.269434 1693723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-1625104/.minikube/files/etc/ssl/certs/16304352.pem --> /usr/share/ca-certificates/16304352.pem (1708 bytes)
	I0115 11:13:13.299049 1693723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-1625104/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0115 11:13:13.328309 1693723 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0115 11:13:13.349940 1693723 ssh_runner.go:195] Run: openssl version
	I0115 11:13:13.359309 1693723 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0115 11:13:13.359751 1693723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0115 11:13:13.372013 1693723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0115 11:13:13.376506 1693723 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 15 10:51 /usr/share/ca-certificates/minikubeCA.pem
	I0115 11:13:13.376540 1693723 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 15 10:51 /usr/share/ca-certificates/minikubeCA.pem
	I0115 11:13:13.376589 1693723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0115 11:13:13.385236 1693723 command_runner.go:130] > b5213941
	I0115 11:13:13.385742 1693723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0115 11:13:13.397513 1693723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1630435.pem && ln -fs /usr/share/ca-certificates/1630435.pem /etc/ssl/certs/1630435.pem"
	I0115 11:13:13.409029 1693723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1630435.pem
	I0115 11:13:13.413551 1693723 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 15 10:58 /usr/share/ca-certificates/1630435.pem
	I0115 11:13:13.413590 1693723 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 15 10:58 /usr/share/ca-certificates/1630435.pem
	I0115 11:13:13.413640 1693723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1630435.pem
	I0115 11:13:13.421722 1693723 command_runner.go:130] > 51391683
	I0115 11:13:13.422125 1693723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1630435.pem /etc/ssl/certs/51391683.0"
	I0115 11:13:13.434128 1693723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16304352.pem && ln -fs /usr/share/ca-certificates/16304352.pem /etc/ssl/certs/16304352.pem"
	I0115 11:13:13.445616 1693723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16304352.pem
	I0115 11:13:13.449916 1693723 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 15 10:58 /usr/share/ca-certificates/16304352.pem
	I0115 11:13:13.449946 1693723 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 15 10:58 /usr/share/ca-certificates/16304352.pem
	I0115 11:13:13.449994 1693723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16304352.pem
	I0115 11:13:13.458003 1693723 command_runner.go:130] > 3ec20f2e
	I0115 11:13:13.458500 1693723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16304352.pem /etc/ssl/certs/3ec20f2e.0"
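	The hash-then-symlink sequence above follows OpenSSL's c_rehash convention: clients look up CA certificates in /etc/ssl/certs by subject-name hash with a .0 suffix. A one-liner to verify one of these links by hand:
	
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	ls -l "/etc/ssl/certs/${h}.0"   # should resolve to minikubeCA.pem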
	I0115 11:13:13.469721 1693723 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0115 11:13:13.473912 1693723 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0115 11:13:13.473948 1693723 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0115 11:13:13.473985 1693723 kubeadm.go:404] StartCluster: {Name:multinode-279658 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-279658 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 11:13:13.474058 1693723 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0115 11:13:13.474112 1693723 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 11:13:13.527203 1693723 cri.go:89] found id: ""
	I0115 11:13:13.527274 1693723 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0115 11:13:13.536680 1693723 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0115 11:13:13.536703 1693723 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0115 11:13:13.536712 1693723 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0115 11:13:13.537992 1693723 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0115 11:13:13.549105 1693723 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0115 11:13:13.549203 1693723 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0115 11:13:13.560381 1693723 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0115 11:13:13.560445 1693723 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0115 11:13:13.560461 1693723 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0115 11:13:13.560471 1693723 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0115 11:13:13.560502 1693723 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0115 11:13:13.560538 1693723 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0115 11:13:13.617657 1693723 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0115 11:13:13.617688 1693723 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I0115 11:13:13.617979 1693723 kubeadm.go:322] [preflight] Running pre-flight checks
	I0115 11:13:13.618002 1693723 command_runner.go:130] > [preflight] Running pre-flight checks
	I0115 11:13:13.665890 1693723 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0115 11:13:13.665922 1693723 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0115 11:13:13.665975 1693723 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-aws
	I0115 11:13:13.665987 1693723 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1051-aws
	I0115 11:13:13.666019 1693723 kubeadm.go:322] OS: Linux
	I0115 11:13:13.666028 1693723 command_runner.go:130] > OS: Linux
	I0115 11:13:13.666073 1693723 kubeadm.go:322] CGROUPS_CPU: enabled
	I0115 11:13:13.666084 1693723 command_runner.go:130] > CGROUPS_CPU: enabled
	I0115 11:13:13.666137 1693723 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0115 11:13:13.666149 1693723 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0115 11:13:13.666193 1693723 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0115 11:13:13.666203 1693723 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0115 11:13:13.666247 1693723 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0115 11:13:13.666256 1693723 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0115 11:13:13.666315 1693723 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0115 11:13:13.666325 1693723 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0115 11:13:13.666371 1693723 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0115 11:13:13.666381 1693723 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0115 11:13:13.666423 1693723 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0115 11:13:13.666433 1693723 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0115 11:13:13.666477 1693723 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0115 11:13:13.666486 1693723 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0115 11:13:13.666534 1693723 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0115 11:13:13.666545 1693723 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0115 11:13:13.749579 1693723 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0115 11:13:13.749612 1693723 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0115 11:13:13.749701 1693723 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0115 11:13:13.749711 1693723 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0115 11:13:13.749833 1693723 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0115 11:13:13.749846 1693723 command_runner.go:130] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0115 11:13:13.997516 1693723 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0115 11:13:14.000661 1693723 out.go:204]   - Generating certificates and keys ...
	I0115 11:13:13.997590 1693723 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0115 11:13:14.000858 1693723 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0115 11:13:14.000898 1693723 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0115 11:13:14.001005 1693723 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0115 11:13:14.001038 1693723 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0115 11:13:14.461926 1693723 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0115 11:13:14.461956 1693723 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0115 11:13:15.013389 1693723 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0115 11:13:15.013426 1693723 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0115 11:13:15.389397 1693723 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0115 11:13:15.389426 1693723 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0115 11:13:15.716983 1693723 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0115 11:13:15.717015 1693723 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0115 11:13:16.245166 1693723 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0115 11:13:16.245195 1693723 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0115 11:13:16.245317 1693723 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-279658] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0115 11:13:16.245325 1693723 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-279658] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0115 11:13:16.723017 1693723 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0115 11:13:16.723044 1693723 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0115 11:13:16.723164 1693723 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-279658] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0115 11:13:16.723170 1693723 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-279658] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0115 11:13:17.007904 1693723 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0115 11:13:17.007931 1693723 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0115 11:13:17.533181 1693723 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0115 11:13:17.533207 1693723 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0115 11:13:18.555358 1693723 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0115 11:13:18.555400 1693723 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0115 11:13:18.555712 1693723 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0115 11:13:18.555727 1693723 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0115 11:13:19.320563 1693723 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0115 11:13:19.320596 1693723 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0115 11:13:19.575027 1693723 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0115 11:13:19.575059 1693723 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0115 11:13:20.204483 1693723 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0115 11:13:20.204517 1693723 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0115 11:13:20.443933 1693723 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0115 11:13:20.443963 1693723 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0115 11:13:20.444704 1693723 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0115 11:13:20.444725 1693723 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0115 11:13:20.448991 1693723 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0115 11:13:20.451408 1693723 out.go:204]   - Booting up control plane ...
	I0115 11:13:20.449092 1693723 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0115 11:13:20.451509 1693723 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0115 11:13:20.451527 1693723 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0115 11:13:20.451604 1693723 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0115 11:13:20.451614 1693723 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0115 11:13:20.452363 1693723 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0115 11:13:20.452386 1693723 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0115 11:13:20.463135 1693723 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0115 11:13:20.463159 1693723 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0115 11:13:20.464089 1693723 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0115 11:13:20.464108 1693723 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0115 11:13:20.464152 1693723 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0115 11:13:20.464164 1693723 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0115 11:13:20.566880 1693723 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0115 11:13:20.566893 1693723 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0115 11:13:27.570128 1693723 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.003353 seconds
	I0115 11:13:27.570152 1693723 command_runner.go:130] > [apiclient] All control plane components are healthy after 7.003353 seconds
	I0115 11:13:27.570251 1693723 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0115 11:13:27.570257 1693723 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0115 11:13:27.588677 1693723 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0115 11:13:27.588705 1693723 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0115 11:13:28.119198 1693723 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0115 11:13:28.119224 1693723 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0115 11:13:28.119399 1693723 kubeadm.go:322] [mark-control-plane] Marking the node multinode-279658 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0115 11:13:28.119405 1693723 command_runner.go:130] > [mark-control-plane] Marking the node multinode-279658 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0115 11:13:28.635148 1693723 kubeadm.go:322] [bootstrap-token] Using token: uh2aw3.cjpo1n6rsmpwevxn
	I0115 11:13:28.637461 1693723 out.go:204]   - Configuring RBAC rules ...
	I0115 11:13:28.635260 1693723 command_runner.go:130] > [bootstrap-token] Using token: uh2aw3.cjpo1n6rsmpwevxn
	I0115 11:13:28.637589 1693723 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0115 11:13:28.637600 1693723 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0115 11:13:28.644847 1693723 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0115 11:13:28.644869 1693723 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0115 11:13:28.654889 1693723 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0115 11:13:28.654918 1693723 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0115 11:13:28.659307 1693723 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0115 11:13:28.659336 1693723 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0115 11:13:28.664573 1693723 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0115 11:13:28.664598 1693723 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0115 11:13:28.670133 1693723 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0115 11:13:28.670158 1693723 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0115 11:13:28.685288 1693723 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0115 11:13:28.685312 1693723 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0115 11:13:28.946517 1693723 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0115 11:13:28.946540 1693723 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0115 11:13:29.109922 1693723 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0115 11:13:29.109948 1693723 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0115 11:13:29.109955 1693723 kubeadm.go:322] 
	I0115 11:13:29.110011 1693723 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0115 11:13:29.110021 1693723 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0115 11:13:29.110026 1693723 kubeadm.go:322] 
	I0115 11:13:29.110098 1693723 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0115 11:13:29.110107 1693723 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0115 11:13:29.110112 1693723 kubeadm.go:322] 
	I0115 11:13:29.110136 1693723 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0115 11:13:29.110145 1693723 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0115 11:13:29.110200 1693723 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0115 11:13:29.110216 1693723 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0115 11:13:29.110267 1693723 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0115 11:13:29.110289 1693723 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0115 11:13:29.110295 1693723 kubeadm.go:322] 
	I0115 11:13:29.110349 1693723 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0115 11:13:29.110357 1693723 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0115 11:13:29.110362 1693723 kubeadm.go:322] 
	I0115 11:13:29.110407 1693723 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0115 11:13:29.110415 1693723 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0115 11:13:29.110420 1693723 kubeadm.go:322] 
	I0115 11:13:29.110474 1693723 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0115 11:13:29.110482 1693723 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0115 11:13:29.110551 1693723 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0115 11:13:29.110564 1693723 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0115 11:13:29.110628 1693723 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0115 11:13:29.110636 1693723 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0115 11:13:29.110640 1693723 kubeadm.go:322] 
	I0115 11:13:29.110725 1693723 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0115 11:13:29.110734 1693723 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0115 11:13:29.110805 1693723 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0115 11:13:29.110813 1693723 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0115 11:13:29.110818 1693723 kubeadm.go:322] 
	I0115 11:13:29.110896 1693723 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token uh2aw3.cjpo1n6rsmpwevxn \
	I0115 11:13:29.110904 1693723 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token uh2aw3.cjpo1n6rsmpwevxn \
	I0115 11:13:29.110999 1693723 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:9fc86a3add6326d4608da878bd8e422e94962742c71a62ee80a4f994be1f8a81 \
	I0115 11:13:29.111008 1693723 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:9fc86a3add6326d4608da878bd8e422e94962742c71a62ee80a4f994be1f8a81 \
	I0115 11:13:29.111029 1693723 kubeadm.go:322] 	--control-plane 
	I0115 11:13:29.111037 1693723 command_runner.go:130] > 	--control-plane 
	I0115 11:13:29.111042 1693723 kubeadm.go:322] 
	I0115 11:13:29.111121 1693723 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0115 11:13:29.111130 1693723 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0115 11:13:29.111134 1693723 kubeadm.go:322] 
	I0115 11:13:29.111211 1693723 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token uh2aw3.cjpo1n6rsmpwevxn \
	I0115 11:13:29.111220 1693723 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token uh2aw3.cjpo1n6rsmpwevxn \
	I0115 11:13:29.111314 1693723 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:9fc86a3add6326d4608da878bd8e422e94962742c71a62ee80a4f994be1f8a81 
	I0115 11:13:29.111322 1693723 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:9fc86a3add6326d4608da878bd8e422e94962742c71a62ee80a4f994be1f8a81 
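	The bootstrap token in the join commands above has a 24h TTL (set in the InitConfiguration earlier), so the printed command eventually goes stale. On the control-plane node it can be listed or regenerated:
	
	sudo kubeadm token list
	sudo kubeadm token create --print-join-command   # mints a fresh token and prints a complete join command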
	I0115 11:13:29.112068 1693723 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I0115 11:13:29.112087 1693723 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I0115 11:13:29.112185 1693723 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0115 11:13:29.112196 1693723 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
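
The --discovery-token-ca-cert-hash printed in the join commands above is kubeadm's pin on the cluster CA: a SHA-256 digest of the CA certificate's DER-encoded SubjectPublicKeyInfo. A minimal Go sketch of the same computation, assuming the standard kubeadm CA path:

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	// Standard kubeadm CA location; path assumed for illustration.
    	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		panic("no PEM block in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// kubeadm pins the SHA-256 of the CA's DER-encoded SubjectPublicKeyInfo.
    	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }
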
	I0115 11:13:29.112209 1693723 cni.go:84] Creating CNI manager for ""
	I0115 11:13:29.112215 1693723 cni.go:136] 1 nodes found, recommending kindnet
	I0115 11:13:29.114967 1693723 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0115 11:13:29.117463 1693723 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0115 11:13:29.128152 1693723 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0115 11:13:29.128183 1693723 command_runner.go:130] >   Size: 4030506   	Blocks: 7880       IO Block: 4096   regular file
	I0115 11:13:29.128192 1693723 command_runner.go:130] > Device: 3ah/58d	Inode: 1826992     Links: 1
	I0115 11:13:29.128199 1693723 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0115 11:13:29.128206 1693723 command_runner.go:130] > Access: 2023-12-04 16:39:54.000000000 +0000
	I0115 11:13:29.128214 1693723 command_runner.go:130] > Modify: 2023-12-04 16:39:54.000000000 +0000
	I0115 11:13:29.128224 1693723 command_runner.go:130] > Change: 2024-01-15 10:51:11.139562617 +0000
	I0115 11:13:29.128230 1693723 command_runner.go:130] >  Birth: 2024-01-15 10:51:11.091563836 +0000
	I0115 11:13:29.131547 1693723 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0115 11:13:29.131564 1693723 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0115 11:13:29.181699 1693723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0115 11:13:30.048371 1693723 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0115 11:13:30.058126 1693723 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0115 11:13:30.068346 1693723 command_runner.go:130] > serviceaccount/kindnet created
	I0115 11:13:30.084669 1693723 command_runner.go:130] > daemonset.apps/kindnet created
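
The four "created" lines confirm that the kindnet manifest copied to /var/tmp/minikube/cni.yaml was applied with the version-pinned kubectl. A stand-alone sketch of that apply step (paths taken straight from the log; illustrative only, not minikube's ssh_runner itself):

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    )

    func main() {
    	// Only meaningful when run on the minikube node itself.
    	out, err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.28.4/kubectl",
    		"apply", "--kubeconfig=/var/lib/minikube/kubeconfig",
    		"-f", "/var/tmp/minikube/cni.yaml").CombinedOutput()
    	if err != nil {
    		log.Fatalf("apply failed: %v\n%s", err, out)
    	}
    	fmt.Printf("%s", out) // e.g. "daemonset.apps/kindnet created"
    }
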
	I0115 11:13:30.091393 1693723 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0115 11:13:30.091449 1693723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:13:30.091545 1693723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=49acfca761ba3cce5d2bedb7b4a0191c7f924d23 minikube.k8s.io/name=multinode-279658 minikube.k8s.io/updated_at=2024_01_15T11_13_30_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:13:30.294786 1693723 command_runner.go:130] > node/multinode-279658 labeled
	I0115 11:13:30.298416 1693723 command_runner.go:130] > -16
	I0115 11:13:30.298448 1693723 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0115 11:13:30.298470 1693723 ops.go:34] apiserver oom_adj: -16
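
The -16 read back here is the kube-apiserver's OOM adjustment, fetched with the exact bash pipeline shown above; a sketch of the same check:

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    )

    func main() {
    	// Same pipeline as the log: resolve the kube-apiserver PID with pgrep,
    	// then read its (legacy) oom_adj file.
    	out, err := exec.Command("/bin/bash", "-c",
    		"cat /proc/$(pgrep kube-apiserver)/oom_adj").Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("apiserver oom_adj: %s", out) // "-16" in this run
    }
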
	I0115 11:13:30.298538 1693723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:13:30.407026 1693723 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 11:13:30.799572 1693723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:13:30.897248 1693723 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 11:13:31.298747 1693723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:13:31.393761 1693723 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 11:13:31.799211 1693723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:13:31.891536 1693723 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 11:13:32.299129 1693723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:13:32.401539 1693723 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 11:13:32.799143 1693723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:13:32.891047 1693723 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 11:13:33.299662 1693723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:13:33.386203 1693723 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 11:13:33.798706 1693723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:13:33.889909 1693723 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 11:13:34.298606 1693723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:13:34.393277 1693723 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 11:13:34.798693 1693723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:13:34.892883 1693723 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 11:13:35.299466 1693723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:13:35.395708 1693723 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 11:13:35.799258 1693723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:13:35.891187 1693723 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 11:13:36.298747 1693723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:13:36.398370 1693723 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 11:13:36.798684 1693723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:13:36.889647 1693723 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 11:13:37.299143 1693723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:13:37.393160 1693723 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 11:13:37.799475 1693723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:13:37.890810 1693723 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 11:13:38.299406 1693723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:13:38.387107 1693723 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 11:13:38.799367 1693723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:13:38.893102 1693723 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 11:13:39.299422 1693723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:13:39.390914 1693723 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 11:13:39.799524 1693723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:13:39.883510 1693723 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 11:13:40.299283 1693723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:13:40.390500 1693723 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 11:13:40.799582 1693723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:13:40.893158 1693723 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 11:13:41.298803 1693723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:13:41.390508 1693723 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0115 11:13:41.799198 1693723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:13:41.972594 1693723 command_runner.go:130] > NAME      SECRETS   AGE
	I0115 11:13:41.972614 1693723 command_runner.go:130] > default   0         0s
	I0115 11:13:41.975687 1693723 kubeadm.go:1088] duration metric: took 11.884305751s to wait for elevateKubeSystemPrivileges.
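
The block above is a plain ~500ms poll: `kubectl get sa default` is retried until the controller-manager has created the "default" ServiceAccount (about 11.9s in this run). A sketch of that loop, with the deadline as an assumed bound:

    package main

    import (
    	"log"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(2 * time.Minute) // bound is an assumption
    	for {
    		err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.28.4/kubectl",
    			"get", "sa", "default",
    			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
    		if err == nil {
    			return // the NAME/SECRETS/AGE row finally came back
    		}
    		if time.Now().After(deadline) {
    			log.Fatal("timed out waiting for the default serviceaccount")
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }
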
	I0115 11:13:41.975713 1693723 kubeadm.go:406] StartCluster complete in 28.501731437s
	I0115 11:13:41.975730 1693723 settings.go:142] acquiring lock: {Name:mk05555b5306114ae6571475ccb387a5354ea318 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 11:13:41.975793 1693723 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17953-1625104/kubeconfig
	I0115 11:13:41.976556 1693723 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-1625104/kubeconfig: {Name:mk8fd98ab18475cc98d08290957f6662a0acdd86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 11:13:41.977066 1693723 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17953-1625104/kubeconfig
	I0115 11:13:41.977317 1693723 kapi.go:59] client config for multinode-279658: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/multinode-279658/client.crt", KeyFile:"/home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/multinode-279658/client.key", CAFile:"/home/jenkins/minikube-integration/17953-1625104/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9dd0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0115 11:13:41.978597 1693723 config.go:182] Loaded profile config "multinode-279658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
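
The rest.Config dump above is derived from the integration run's kubeconfig. With client-go, an equivalent config is built like this (sketch; minikube's kapi.go may differ in detail):

    package main

    import (
    	"fmt"
    	"log"

    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Kubeconfig path taken from the log lines above.
    	cfg, err := clientcmd.BuildConfigFromFlags("",
    		"/home/jenkins/minikube-integration/17953-1625104/kubeconfig")
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println(cfg.Host) // "https://192.168.58.2:8443" for this profile
    }
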
	I0115 11:13:41.978650 1693723 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0115 11:13:41.978842 1693723 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0115 11:13:41.978916 1693723 addons.go:69] Setting storage-provisioner=true in profile "multinode-279658"
	I0115 11:13:41.978932 1693723 addons.go:234] Setting addon storage-provisioner=true in "multinode-279658"
	I0115 11:13:41.978968 1693723 host.go:66] Checking if "multinode-279658" exists ...
	I0115 11:13:41.979414 1693723 cli_runner.go:164] Run: docker container inspect multinode-279658 --format={{.State.Status}}
	I0115 11:13:41.979766 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0115 11:13:41.979808 1693723 round_trippers.go:469] Request Headers:
	I0115 11:13:41.979832 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:13:41.979854 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:13:41.980093 1693723 cert_rotation.go:137] Starting client certificate rotation controller
	I0115 11:13:41.980558 1693723 addons.go:69] Setting default-storageclass=true in profile "multinode-279658"
	I0115 11:13:41.980596 1693723 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-279658"
	I0115 11:13:41.980932 1693723 cli_runner.go:164] Run: docker container inspect multinode-279658 --format={{.State.Status}}
	I0115 11:13:42.019919 1693723 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17953-1625104/kubeconfig
	I0115 11:13:42.020268 1693723 kapi.go:59] client config for multinode-279658: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/multinode-279658/client.crt", KeyFile:"/home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/multinode-279658/client.key", CAFile:"/home/jenkins/minikube-integration/17953-1625104/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9dd0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0115 11:13:42.020566 1693723 addons.go:234] Setting addon default-storageclass=true in "multinode-279658"
	I0115 11:13:42.020605 1693723 host.go:66] Checking if "multinode-279658" exists ...
	I0115 11:13:42.026235 1693723 cli_runner.go:164] Run: docker container inspect multinode-279658 --format={{.State.Status}}
	I0115 11:13:42.050338 1693723 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 11:13:42.052815 1693723 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 11:13:42.052842 1693723 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0115 11:13:42.052913 1693723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-279658
	I0115 11:13:42.073538 1693723 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0115 11:13:42.073562 1693723 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0115 11:13:42.073632 1693723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-279658
	I0115 11:13:42.115177 1693723 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34794 SSHKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/multinode-279658/id_rsa Username:docker}
	I0115 11:13:42.136057 1693723 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34794 SSHKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/multinode-279658/id_rsa Username:docker}
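
"scp memory --> <path>" above means the addon manifests are rendered in memory and streamed to the node over the SSH clients just created (127.0.0.1:34794, user docker). A rough stand-in using the system ssh binary; minikube itself uses an in-process SSH client, and the manifest bytes here are a placeholder:

    package main

    import (
    	"bytes"
    	"log"
    	"os/exec"
    )

    func main() {
    	manifest := []byte("# rendered storage-provisioner.yaml bytes go here\n") // placeholder
    	key := "/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/multinode-279658/id_rsa"
    	cmd := exec.Command("ssh", "-i", key, "-p", "34794", "docker@127.0.0.1",
    		"sudo tee /etc/kubernetes/addons/storage-provisioner.yaml >/dev/null")
    	cmd.Stdin = bytes.NewReader(manifest) // the "memory" side of the copy
    	if err := cmd.Run(); err != nil {
    		log.Fatal(err)
    	}
    }
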
	I0115 11:13:42.141727 1693723 round_trippers.go:574] Response Status: 200 OK in 161 milliseconds
	I0115 11:13:42.141752 1693723 round_trippers.go:577] Response Headers:
	I0115 11:13:42.141762 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:13:42.141768 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:13:42.141777 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:13:42.141783 1693723 round_trippers.go:580]     Content-Length: 291
	I0115 11:13:42.141790 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:13:42 GMT
	I0115 11:13:42.141797 1693723 round_trippers.go:580]     Audit-Id: c7d70138-eb1f-4cc8-83be-2e231d64517f
	I0115 11:13:42.141803 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:13:42.178918 1693723 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"ce1f76ee-403c-4b9e-85a1-54036c2cd680","resourceVersion":"349","creationTimestamp":"2024-01-15T11:13:28Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0115 11:13:42.179394 1693723 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"ce1f76ee-403c-4b9e-85a1-54036c2cd680","resourceVersion":"349","creationTimestamp":"2024-01-15T11:13:28Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0115 11:13:42.179470 1693723 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0115 11:13:42.179480 1693723 round_trippers.go:469] Request Headers:
	I0115 11:13:42.179490 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:13:42.179498 1693723 round_trippers.go:473]     Content-Type: application/json
	I0115 11:13:42.179507 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:13:42.232650 1693723 round_trippers.go:574] Response Status: 409 Conflict in 53 milliseconds
	I0115 11:13:42.232731 1693723 round_trippers.go:577] Response Headers:
	I0115 11:13:42.232753 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:13:42.232776 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:13:42.232810 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:13:42.232838 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:13:42.232864 1693723 round_trippers.go:580]     Content-Length: 332
	I0115 11:13:42.232899 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:13:42 GMT
	I0115 11:13:42.232925 1693723 round_trippers.go:580]     Audit-Id: 20c4f5ef-a740-4be0-b8dc-f5617982a63d
	I0115 11:13:42.240390 1693723 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again","reason":"Conflict","details":{"name":"coredns","group":"apps","kind":"deployments"},"code":409}
	W0115 11:13:42.240732 1693723 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "multinode-279658" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E0115 11:13:42.240788 1693723 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
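
The PUT fails with 409 Conflict because the Scale object read at resourceVersion 349 went stale before the write landed; the API server enforces optimistic concurrency, and minikube treats this particular failure as non-retryable. For comparison, client-go's standard retry-on-conflict pattern re-reads the object on each attempt (sketch; clientset wiring assumed):

    package main

    import (
    	"context"
    	"log"
    	"os"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    	"k8s.io/client-go/util/retry"
    )

    // scaleCoreDNSToOne re-reads the Scale subresource on every attempt so the
    // resourceVersion is fresh, instead of reusing the stale one behind the
    // 409 above.
    func scaleCoreDNSToOne(ctx context.Context, cs *kubernetes.Clientset) error {
    	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
    		scale, err := cs.AppsV1().Deployments("kube-system").
    			GetScale(ctx, "coredns", metav1.GetOptions{})
    		if err != nil {
    			return err
    		}
    		scale.Spec.Replicas = 1
    		_, err = cs.AppsV1().Deployments("kube-system").
    			UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
    		return err
    	})
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	if err := scaleCoreDNSToOne(context.Background(), cs); err != nil {
    		log.Fatal(err)
    	}
    }
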
	I0115 11:13:42.240829 1693723 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0115 11:13:42.245200 1693723 out.go:177] * Verifying Kubernetes components...
	I0115 11:13:42.247894 1693723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 11:13:42.310988 1693723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 11:13:42.345680 1693723 command_runner.go:130] > apiVersion: v1
	I0115 11:13:42.345700 1693723 command_runner.go:130] > data:
	I0115 11:13:42.345705 1693723 command_runner.go:130] >   Corefile: |
	I0115 11:13:42.345710 1693723 command_runner.go:130] >     .:53 {
	I0115 11:13:42.345715 1693723 command_runner.go:130] >         errors
	I0115 11:13:42.345721 1693723 command_runner.go:130] >         health {
	I0115 11:13:42.345729 1693723 command_runner.go:130] >            lameduck 5s
	I0115 11:13:42.345735 1693723 command_runner.go:130] >         }
	I0115 11:13:42.345740 1693723 command_runner.go:130] >         ready
	I0115 11:13:42.345748 1693723 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0115 11:13:42.345754 1693723 command_runner.go:130] >            pods insecure
	I0115 11:13:42.345762 1693723 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0115 11:13:42.345768 1693723 command_runner.go:130] >            ttl 30
	I0115 11:13:42.345773 1693723 command_runner.go:130] >         }
	I0115 11:13:42.345778 1693723 command_runner.go:130] >         prometheus :9153
	I0115 11:13:42.345784 1693723 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0115 11:13:42.345789 1693723 command_runner.go:130] >            max_concurrent 1000
	I0115 11:13:42.345793 1693723 command_runner.go:130] >         }
	I0115 11:13:42.345798 1693723 command_runner.go:130] >         cache 30
	I0115 11:13:42.345803 1693723 command_runner.go:130] >         loop
	I0115 11:13:42.345808 1693723 command_runner.go:130] >         reload
	I0115 11:13:42.345813 1693723 command_runner.go:130] >         loadbalance
	I0115 11:13:42.345819 1693723 command_runner.go:130] >     }
	I0115 11:13:42.345824 1693723 command_runner.go:130] > kind: ConfigMap
	I0115 11:13:42.345828 1693723 command_runner.go:130] > metadata:
	I0115 11:13:42.345835 1693723 command_runner.go:130] >   creationTimestamp: "2024-01-15T11:13:28Z"
	I0115 11:13:42.345841 1693723 command_runner.go:130] >   name: coredns
	I0115 11:13:42.345853 1693723 command_runner.go:130] >   namespace: kube-system
	I0115 11:13:42.345859 1693723 command_runner.go:130] >   resourceVersion: "266"
	I0115 11:13:42.345865 1693723 command_runner.go:130] >   uid: a01e9e61-3152-4398-9426-deb303e2e4d4
	I0115 11:13:42.347325 1693723 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
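
The sed pipeline above patches the Corefile fetched a few lines earlier: it inserts a log directive ahead of errors and a hosts block ahead of the forward plugin, mapping host.minikube.internal to the gateway (192.168.58.1 here). A Go sketch of the same splice on a trimmed Corefile:

    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	// Trimmed Corefile standing in for the ConfigMap contents shown above.
    	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}\n"
    	hosts := "        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }\n"
    	// sed's `i\` inserts before the matched line: the hosts block goes
    	// in front of forward...
    	patched := strings.Replace(corefile,
    		"        forward . /etc/resolv.conf",
    		hosts+"        forward . /etc/resolv.conf", 1)
    	// ...and a log directive in front of errors.
    	patched = strings.Replace(patched,
    		"        errors\n", "        log\n        errors\n", 1)
    	fmt.Print(patched)
    }
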
	I0115 11:13:42.347851 1693723 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17953-1625104/kubeconfig
	I0115 11:13:42.348242 1693723 kapi.go:59] client config for multinode-279658: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/multinode-279658/client.crt", KeyFile:"/home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/multinode-279658/client.key", CAFile:"/home/jenkins/minikube-integration/17953-1625104/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9dd0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0115 11:13:42.348636 1693723 node_ready.go:35] waiting up to 6m0s for node "multinode-279658" to be "Ready" ...
	I0115 11:13:42.348788 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:13:42.348818 1693723 round_trippers.go:469] Request Headers:
	I0115 11:13:42.348851 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:13:42.348874 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:13:42.352998 1693723 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0115 11:13:42.409159 1693723 round_trippers.go:574] Response Status: 200 OK in 60 milliseconds
	I0115 11:13:42.409247 1693723 round_trippers.go:577] Response Headers:
	I0115 11:13:42.409270 1693723 round_trippers.go:580]     Audit-Id: 55239369-e8ab-4ecc-ae58-7a982064e9f2
	I0115 11:13:42.409292 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:13:42.409324 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:13:42.409353 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:13:42.409374 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:13:42.409411 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:13:42 GMT
	I0115 11:13:42.447132 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:13:42.849485 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:13:42.849567 1693723 round_trippers.go:469] Request Headers:
	I0115 11:13:42.849600 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:13:42.849621 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:13:42.867924 1693723 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0115 11:13:42.867997 1693723 round_trippers.go:577] Response Headers:
	I0115 11:13:42.868049 1693723 round_trippers.go:580]     Audit-Id: a6abc9a2-5c8c-410b-b306-f666dfffb283
	I0115 11:13:42.868083 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:13:42.868108 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:13:42.868130 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:13:42.868167 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:13:42.868191 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:13:42 GMT
	I0115 11:13:42.868370 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:13:43.255869 1693723 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0115 11:13:43.264544 1693723 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0115 11:13:43.273694 1693723 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0115 11:13:43.283021 1693723 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0115 11:13:43.293778 1693723 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0115 11:13:43.306420 1693723 command_runner.go:130] > pod/storage-provisioner created
	I0115 11:13:43.312625 1693723 command_runner.go:130] > configmap/coredns replaced
	I0115 11:13:43.312703 1693723 start.go:929] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
	I0115 11:13:43.312736 1693723 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.001584274s)
	I0115 11:13:43.312799 1693723 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0115 11:13:43.312969 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0115 11:13:43.312995 1693723 round_trippers.go:469] Request Headers:
	I0115 11:13:43.313025 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:13:43.313047 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:13:43.320324 1693723 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0115 11:13:43.320393 1693723 round_trippers.go:577] Response Headers:
	I0115 11:13:43.320416 1693723 round_trippers.go:580]     Audit-Id: a59ea697-ea9a-4b41-8e81-2804bb25781f
	I0115 11:13:43.320440 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:13:43.320477 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:13:43.320502 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:13:43.320523 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:13:43.320558 1693723 round_trippers.go:580]     Content-Length: 1273
	I0115 11:13:43.320592 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:13:43 GMT
	I0115 11:13:43.320810 1693723 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"402"},"items":[{"metadata":{"name":"standard","uid":"de93abe8-042e-4c8f-8a73-44160898671c","resourceVersion":"386","creationTimestamp":"2024-01-15T11:13:42Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-15T11:13:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0115 11:13:43.321292 1693723 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"de93abe8-042e-4c8f-8a73-44160898671c","resourceVersion":"386","creationTimestamp":"2024-01-15T11:13:42Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-15T11:13:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0115 11:13:43.321376 1693723 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0115 11:13:43.321410 1693723 round_trippers.go:469] Request Headers:
	I0115 11:13:43.321435 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:13:43.321456 1693723 round_trippers.go:473]     Content-Type: application/json
	I0115 11:13:43.321492 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:13:43.327466 1693723 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0115 11:13:43.327540 1693723 round_trippers.go:577] Response Headers:
	I0115 11:13:43.327562 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:13:43.327581 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:13:43.327614 1693723 round_trippers.go:580]     Content-Length: 1220
	I0115 11:13:43.327637 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:13:43 GMT
	I0115 11:13:43.327659 1693723 round_trippers.go:580]     Audit-Id: f622cf7b-a860-47a4-8df7-5a24f29f80d9
	I0115 11:13:43.327693 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:13:43.327718 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:13:43.327773 1693723 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"de93abe8-042e-4c8f-8a73-44160898671c","resourceVersion":"386","creationTimestamp":"2024-01-15T11:13:42Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-15T11:13:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0115 11:13:43.330924 1693723 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0115 11:13:43.333585 1693723 addons.go:505] enable addons completed in 1.354735392s: enabled=[storage-provisioner default-storageclass]
	I0115 11:13:43.349431 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:13:43.349455 1693723 round_trippers.go:469] Request Headers:
	I0115 11:13:43.349464 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:13:43.349472 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:13:43.352103 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:13:43.352131 1693723 round_trippers.go:577] Response Headers:
	I0115 11:13:43.352139 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:13:43.352147 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:13:43 GMT
	I0115 11:13:43.352153 1693723 round_trippers.go:580]     Audit-Id: ffeea503-91b4-49c8-8604-38b0765806be
	I0115 11:13:43.352160 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:13:43.352166 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:13:43.352173 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:13:43.352497 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:13:43.848906 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:13:43.848943 1693723 round_trippers.go:469] Request Headers:
	I0115 11:13:43.848954 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:13:43.848962 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:13:43.851557 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:13:43.851622 1693723 round_trippers.go:577] Response Headers:
	I0115 11:13:43.851646 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:13:43.851668 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:13:43.851703 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:13:43 GMT
	I0115 11:13:43.851719 1693723 round_trippers.go:580]     Audit-Id: d8eaf468-0f8d-4d2f-b5c7-9c7b162a494b
	I0115 11:13:43.851726 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:13:43.851732 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:13:43.851860 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:13:44.348938 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:13:44.348965 1693723 round_trippers.go:469] Request Headers:
	I0115 11:13:44.348975 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:13:44.348983 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:13:44.351624 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:13:44.351663 1693723 round_trippers.go:577] Response Headers:
	I0115 11:13:44.351672 1693723 round_trippers.go:580]     Audit-Id: 17e65f8e-3aea-4f23-aa73-7ce80390ac74
	I0115 11:13:44.351679 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:13:44.351685 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:13:44.351691 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:13:44.351698 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:13:44.351705 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:13:44 GMT
	I0115 11:13:44.352051 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:13:44.352443 1693723 node_ready.go:58] node "multinode-279658" has status "Ready":"False"
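
node_ready.go keeps GETting the Node every 500ms and reports "Ready":"False" until the kubelet posts a Ready condition with status True, which typically happens once the kindnet CNI daemonset is running. The underlying check amounts to this (sketch, using the k8s.io/api types):

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    // nodeReady is the gist of the node_ready.go check: a node counts as Ready
    // once its NodeReady condition reports ConditionTrue.
    func nodeReady(node *corev1.Node) bool {
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	n := &corev1.Node{}       // a just-registered node carries no Ready=True yet
    	fmt.Println(nodeReady(n)) // false, matching the polls above
    }
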
	I0115 11:13:44.849156 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:13:44.849177 1693723 round_trippers.go:469] Request Headers:
	I0115 11:13:44.849187 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:13:44.849194 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:13:44.851961 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:13:44.852022 1693723 round_trippers.go:577] Response Headers:
	I0115 11:13:44.852044 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:13:44 GMT
	I0115 11:13:44.852066 1693723 round_trippers.go:580]     Audit-Id: 26ec1963-25c1-4f5a-836a-71dd2d35f2a7
	I0115 11:13:44.852099 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:13:44.852122 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:13:44.852143 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:13:44.852165 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:13:44.852310 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:13:45.349031 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:13:45.349090 1693723 round_trippers.go:469] Request Headers:
	I0115 11:13:45.349152 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:13:45.349178 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:13:45.352072 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:13:45.352096 1693723 round_trippers.go:577] Response Headers:
	I0115 11:13:45.352150 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:13:45.352158 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:13:45.352165 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:13:45.352171 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:13:45 GMT
	I0115 11:13:45.352177 1693723 round_trippers.go:580]     Audit-Id: 0ff4b021-b8d8-4635-8c28-c474bc7d8e58
	I0115 11:13:45.352183 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:13:45.352325 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:13:45.849821 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:13:45.849849 1693723 round_trippers.go:469] Request Headers:
	I0115 11:13:45.849861 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:13:45.849868 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:13:45.852634 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:13:45.852658 1693723 round_trippers.go:577] Response Headers:
	I0115 11:13:45.852667 1693723 round_trippers.go:580]     Audit-Id: 05a5f160-9101-4309-9911-bdace4dbecf8
	I0115 11:13:45.852703 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:13:45.852710 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:13:45.852724 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:13:45.852733 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:13:45.852739 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:13:45 GMT
	I0115 11:13:45.852901 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:13:46.349191 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:13:46.349217 1693723 round_trippers.go:469] Request Headers:
	I0115 11:13:46.349227 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:13:46.349234 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:13:46.351775 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:13:46.351799 1693723 round_trippers.go:577] Response Headers:
	I0115 11:13:46.351809 1693723 round_trippers.go:580]     Audit-Id: d339a8a3-0430-47aa-87fa-d6d889bbf2bc
	I0115 11:13:46.351816 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:13:46.351823 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:13:46.351870 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:13:46.351884 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:13:46.351914 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:13:46 GMT
	I0115 11:13:46.352388 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:13:46.352784 1693723 node_ready.go:58] node "multinode-279658" has status "Ready":"False"
	I0115 11:13:46.848915 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:13:46.848941 1693723 round_trippers.go:469] Request Headers:
	I0115 11:13:46.848956 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:13:46.848963 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:13:46.851673 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:13:46.851693 1693723 round_trippers.go:577] Response Headers:
	I0115 11:13:46.851702 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:13:46 GMT
	I0115 11:13:46.851709 1693723 round_trippers.go:580]     Audit-Id: 844e6b44-d528-4271-8a0f-e7a44b29149c
	I0115 11:13:46.851717 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:13:46.851723 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:13:46.851729 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:13:46.851736 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:13:46.851834 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:13:47.349767 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:13:47.349793 1693723 round_trippers.go:469] Request Headers:
	I0115 11:13:47.349804 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:13:47.349811 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:13:47.352541 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:13:47.352568 1693723 round_trippers.go:577] Response Headers:
	I0115 11:13:47.352577 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:13:47.352588 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:13:47.352595 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:13:47.352604 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:13:47 GMT
	I0115 11:13:47.352613 1693723 round_trippers.go:580]     Audit-Id: acfd8a15-05c6-4427-92c1-3fd2e5d677a7
	I0115 11:13:47.352628 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:13:47.352782 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:13:47.848928 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:13:47.848952 1693723 round_trippers.go:469] Request Headers:
	I0115 11:13:47.848962 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:13:47.848969 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:13:47.851494 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:13:47.851519 1693723 round_trippers.go:577] Response Headers:
	I0115 11:13:47.851528 1693723 round_trippers.go:580]     Audit-Id: 8997ed13-4051-42c3-88f0-5a06544f4142
	I0115 11:13:47.851534 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:13:47.851540 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:13:47.851546 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:13:47.851567 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:13:47.851577 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:13:47 GMT
	I0115 11:13:47.851695 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:13:48.349867 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:13:48.349896 1693723 round_trippers.go:469] Request Headers:
	I0115 11:13:48.349905 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:13:48.349913 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:13:48.352575 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:13:48.352640 1693723 round_trippers.go:577] Response Headers:
	I0115 11:13:48.352689 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:13:48.352715 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:13:48.352730 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:13:48 GMT
	I0115 11:13:48.352737 1693723 round_trippers.go:580]     Audit-Id: 001791cc-76e5-435c-a9a2-0cce2ee5d72d
	I0115 11:13:48.352756 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:13:48.352764 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:13:48.352938 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:13:48.353330 1693723 node_ready.go:58] node "multinode-279658" has status "Ready":"False"
	I0115 11:13:48.849337 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:13:48.849361 1693723 round_trippers.go:469] Request Headers:
	I0115 11:13:48.849371 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:13:48.849382 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:13:48.851902 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:13:48.851925 1693723 round_trippers.go:577] Response Headers:
	I0115 11:13:48.851934 1693723 round_trippers.go:580]     Audit-Id: 05fb2d41-6a63-422d-81cb-50fba9a13b4d
	I0115 11:13:48.851940 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:13:48.851946 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:13:48.851953 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:13:48.851967 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:13:48.851978 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:13:48 GMT
	I0115 11:13:48.852169 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:13:49.349833 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:13:49.349860 1693723 round_trippers.go:469] Request Headers:
	I0115 11:13:49.349870 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:13:49.349877 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:13:49.352426 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:13:49.352446 1693723 round_trippers.go:577] Response Headers:
	I0115 11:13:49.352455 1693723 round_trippers.go:580]     Audit-Id: 7f8bb75f-8681-47af-b82e-8ca2eb6b1dd3
	I0115 11:13:49.352461 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:13:49.352467 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:13:49.352474 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:13:49.352480 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:13:49.352486 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:13:49 GMT
	I0115 11:13:49.352643 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:13:49.849818 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:13:49.849843 1693723 round_trippers.go:469] Request Headers:
	I0115 11:13:49.849853 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:13:49.849864 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:13:49.852397 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:13:49.852424 1693723 round_trippers.go:577] Response Headers:
	I0115 11:13:49.852434 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:13:49 GMT
	I0115 11:13:49.852514 1693723 round_trippers.go:580]     Audit-Id: 54b9ec72-6939-408b-affc-fec091c6481c
	I0115 11:13:49.852526 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:13:49.852533 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:13:49.852541 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:13:49.852548 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:13:49.852667 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:13:50.349384 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:13:50.349408 1693723 round_trippers.go:469] Request Headers:
	I0115 11:13:50.349418 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:13:50.349425 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:13:50.352023 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:13:50.352052 1693723 round_trippers.go:577] Response Headers:
	I0115 11:13:50.352060 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:13:50.352067 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:13:50.352073 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:13:50.352080 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:13:50 GMT
	I0115 11:13:50.352087 1693723 round_trippers.go:580]     Audit-Id: 10912381-efad-40df-a4e6-52383ac0982e
	I0115 11:13:50.352097 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:13:50.352186 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:13:50.849276 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:13:50.849302 1693723 round_trippers.go:469] Request Headers:
	I0115 11:13:50.849311 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:13:50.849319 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:13:50.851873 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:13:50.851895 1693723 round_trippers.go:577] Response Headers:
	I0115 11:13:50.851904 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:13:50.851910 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:13:50.851916 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:13:50.851923 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:13:50.851930 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:13:50 GMT
	I0115 11:13:50.851936 1693723 round_trippers.go:580]     Audit-Id: 706b73d4-8785-4e2b-b553-33ef83dce1eb
	I0115 11:13:50.852026 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:13:50.852399 1693723 node_ready.go:58] node "multinode-279658" has status "Ready":"False"
	I0115 11:13:51.348933 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:13:51.348957 1693723 round_trippers.go:469] Request Headers:
	I0115 11:13:51.348967 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:13:51.348974 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:13:51.351512 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:13:51.351545 1693723 round_trippers.go:577] Response Headers:
	I0115 11:13:51.351555 1693723 round_trippers.go:580]     Audit-Id: f4068974-afc6-40f1-b36b-9872d5d74aef
	I0115 11:13:51.351562 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:13:51.351569 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:13:51.351575 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:13:51.351584 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:13:51.351590 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:13:51 GMT
	I0115 11:13:51.351734 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:13:51.849398 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:13:51.849420 1693723 round_trippers.go:469] Request Headers:
	I0115 11:13:51.849430 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:13:51.849437 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:13:51.851905 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:13:51.851926 1693723 round_trippers.go:577] Response Headers:
	I0115 11:13:51.851935 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:13:51.851941 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:13:51.851948 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:13:51 GMT
	I0115 11:13:51.851954 1693723 round_trippers.go:580]     Audit-Id: a08a24da-2b7d-4e8d-96dd-b8f242142357
	I0115 11:13:51.851961 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:13:51.851967 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:13:51.852076 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:13:52.349848 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:13:52.349889 1693723 round_trippers.go:469] Request Headers:
	I0115 11:13:52.349900 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:13:52.349907 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:13:52.352328 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:13:52.352350 1693723 round_trippers.go:577] Response Headers:
	I0115 11:13:52.352359 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:13:52.352365 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:13:52.352371 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:13:52.352378 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:13:52.352384 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:13:52 GMT
	I0115 11:13:52.352390 1693723 round_trippers.go:580]     Audit-Id: 7cb53161-9915-4790-a012-d5e3ea3b0d77
	I0115 11:13:52.352495 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:13:52.849803 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:13:52.849828 1693723 round_trippers.go:469] Request Headers:
	I0115 11:13:52.849838 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:13:52.849846 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:13:52.852284 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:13:52.852310 1693723 round_trippers.go:577] Response Headers:
	I0115 11:13:52.852319 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:13:52.852325 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:13:52.852332 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:13:52 GMT
	I0115 11:13:52.852340 1693723 round_trippers.go:580]     Audit-Id: 81f49481-a0cf-415f-94cf-067a1af546e2
	I0115 11:13:52.852347 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:13:52.852358 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:13:52.852463 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:13:52.852849 1693723 node_ready.go:58] node "multinode-279658" has status "Ready":"False"
	I0115 11:13:53.349608 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:13:53.349631 1693723 round_trippers.go:469] Request Headers:
	I0115 11:13:53.349641 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:13:53.349648 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:13:53.352123 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:13:53.352147 1693723 round_trippers.go:577] Response Headers:
	I0115 11:13:53.352155 1693723 round_trippers.go:580]     Audit-Id: c5b37f77-f7e2-4167-ab1d-168ad61f7d17
	I0115 11:13:53.352161 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:13:53.352167 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:13:53.352173 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:13:53.352180 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:13:53.352190 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:13:53 GMT
	I0115 11:13:53.352402 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:13:53.848913 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:13:53.848938 1693723 round_trippers.go:469] Request Headers:
	I0115 11:13:53.848948 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:13:53.848956 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:13:53.851447 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:13:53.851472 1693723 round_trippers.go:577] Response Headers:
	I0115 11:13:53.851481 1693723 round_trippers.go:580]     Audit-Id: 78eaa28f-c3f7-44db-aba0-e21516a23dfc
	I0115 11:13:53.851496 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:13:53.851507 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:13:53.851519 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:13:53.851525 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:13:53.851532 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:13:53 GMT
	I0115 11:13:53.851622 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:13:54.348912 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:13:54.348936 1693723 round_trippers.go:469] Request Headers:
	I0115 11:13:54.348946 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:13:54.348953 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:13:54.351512 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:13:54.351533 1693723 round_trippers.go:577] Response Headers:
	I0115 11:13:54.351541 1693723 round_trippers.go:580]     Audit-Id: dd7993ee-3d81-4551-a699-a66b581f0b7b
	I0115 11:13:54.351548 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:13:54.351554 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:13:54.351560 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:13:54.351566 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:13:54.351574 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:13:54 GMT
	I0115 11:13:54.351701 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:13:54.848886 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:13:54.848913 1693723 round_trippers.go:469] Request Headers:
	I0115 11:13:54.848924 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:13:54.848931 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:13:54.851458 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:13:54.851481 1693723 round_trippers.go:577] Response Headers:
	I0115 11:13:54.851490 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:13:54.851496 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:13:54 GMT
	I0115 11:13:54.851502 1693723 round_trippers.go:580]     Audit-Id: 0f5f47f5-57f5-4213-8170-967ee1a7eba5
	I0115 11:13:54.851509 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:13:54.851519 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:13:54.851527 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:13:54.851656 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:13:55.349772 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:13:55.349800 1693723 round_trippers.go:469] Request Headers:
	I0115 11:13:55.349810 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:13:55.349818 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:13:55.352351 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:13:55.352375 1693723 round_trippers.go:577] Response Headers:
	I0115 11:13:55.352384 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:13:55.352390 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:13:55 GMT
	I0115 11:13:55.352397 1693723 round_trippers.go:580]     Audit-Id: dede3d11-5eca-4c14-ba35-ddf34bb9a917
	I0115 11:13:55.352403 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:13:55.352410 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:13:55.352416 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:13:55.352535 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:13:55.352920 1693723 node_ready.go:58] node "multinode-279658" has status "Ready":"False"
	I0115 11:13:55.849735 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:13:55.849763 1693723 round_trippers.go:469] Request Headers:
	I0115 11:13:55.849774 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:13:55.849781 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:13:55.852219 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:13:55.852240 1693723 round_trippers.go:577] Response Headers:
	I0115 11:13:55.852248 1693723 round_trippers.go:580]     Audit-Id: 4ba6a465-a1b6-4449-9523-1199a50b05dd
	I0115 11:13:55.852254 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:13:55.852260 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:13:55.852267 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:13:55.852273 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:13:55.852282 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:13:55 GMT
	I0115 11:13:55.852373 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:13:56.349741 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:13:56.349771 1693723 round_trippers.go:469] Request Headers:
	I0115 11:13:56.349781 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:13:56.349788 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:13:56.352220 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:13:56.352242 1693723 round_trippers.go:577] Response Headers:
	I0115 11:13:56.352251 1693723 round_trippers.go:580]     Audit-Id: b473cd47-db48-41da-9a56-1599e7082c7a
	I0115 11:13:56.352257 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:13:56.352265 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:13:56.352271 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:13:56.352278 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:13:56.352288 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:13:56 GMT
	I0115 11:13:56.352436 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:13:56.849214 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:13:56.849239 1693723 round_trippers.go:469] Request Headers:
	I0115 11:13:56.849249 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:13:56.849257 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:13:56.851652 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:13:56.851672 1693723 round_trippers.go:577] Response Headers:
	I0115 11:13:56.851680 1693723 round_trippers.go:580]     Audit-Id: a8f14e0e-1968-4719-9799-8bc6df2c19a5
	I0115 11:13:56.851686 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:13:56.851692 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:13:56.851698 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:13:56.851705 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:13:56.851711 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:13:56 GMT
	I0115 11:13:56.851810 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:13:57.349666 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:13:57.349695 1693723 round_trippers.go:469] Request Headers:
	I0115 11:13:57.349705 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:13:57.349713 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:13:57.352201 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:13:57.352225 1693723 round_trippers.go:577] Response Headers:
	I0115 11:13:57.352234 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:13:57 GMT
	I0115 11:13:57.352241 1693723 round_trippers.go:580]     Audit-Id: 74dbb8d0-1d51-4ece-ba96-19ddbc7e2b83
	I0115 11:13:57.352247 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:13:57.352253 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:13:57.352260 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:13:57.352267 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:13:57.352586 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:13:57.352979 1693723 node_ready.go:58] node "multinode-279658" has status "Ready":"False"
	I0115 11:13:57.849875 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:13:57.849915 1693723 round_trippers.go:469] Request Headers:
	I0115 11:13:57.849925 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:13:57.849933 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:13:57.852379 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:13:57.852399 1693723 round_trippers.go:577] Response Headers:
	I0115 11:13:57.852407 1693723 round_trippers.go:580]     Audit-Id: 84d02219-5ba4-4d6f-8403-c5e45c16ba89
	I0115 11:13:57.852414 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:13:57.852420 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:13:57.852426 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:13:57.852432 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:13:57.852443 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:13:57 GMT
	I0115 11:13:57.852528 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:13:58.349623 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:13:58.349649 1693723 round_trippers.go:469] Request Headers:
	I0115 11:13:58.349658 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:13:58.349666 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:13:58.352237 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:13:58.352262 1693723 round_trippers.go:577] Response Headers:
	I0115 11:13:58.352272 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:13:58.352279 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:13:58 GMT
	I0115 11:13:58.352285 1693723 round_trippers.go:580]     Audit-Id: 13d3a1ae-7e9c-4ffe-b369-b49d4f86d558
	I0115 11:13:58.352291 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:13:58.352298 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:13:58.352308 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:13:58.352452 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:13:58.849531 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:13:58.849558 1693723 round_trippers.go:469] Request Headers:
	I0115 11:13:58.849568 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:13:58.849576 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:13:58.852158 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:13:58.852189 1693723 round_trippers.go:577] Response Headers:
	I0115 11:13:58.852198 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:13:58.852204 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:13:58.852210 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:13:58.852219 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:13:58 GMT
	I0115 11:13:58.852225 1693723 round_trippers.go:580]     Audit-Id: 6c6ec9c2-cbdb-49b1-9b66-bbf2f4f38c0c
	I0115 11:13:58.852234 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:13:58.852356 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:13:59.349474 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:13:59.349503 1693723 round_trippers.go:469] Request Headers:
	I0115 11:13:59.349513 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:13:59.349520 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:13:59.352137 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:13:59.352165 1693723 round_trippers.go:577] Response Headers:
	I0115 11:13:59.352175 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:13:59.352181 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:13:59.352191 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:13:59.352198 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:13:59 GMT
	I0115 11:13:59.352205 1693723 round_trippers.go:580]     Audit-Id: 21d3cfca-98db-4219-966f-a15b38dafe1d
	I0115 11:13:59.352214 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:13:59.353053 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:13:59.353509 1693723 node_ready.go:58] node "multinode-279658" has status "Ready":"False"
	I0115 11:13:59.849720 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:13:59.849744 1693723 round_trippers.go:469] Request Headers:
	I0115 11:13:59.849753 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:13:59.849761 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:13:59.852182 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:13:59.852207 1693723 round_trippers.go:577] Response Headers:
	I0115 11:13:59.852216 1693723 round_trippers.go:580]     Audit-Id: 0efddb91-8db7-48f7-91e6-b4a9dc8efa4e
	I0115 11:13:59.852223 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:13:59.852229 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:13:59.852235 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:13:59.852241 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:13:59.852247 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:13:59 GMT
	I0115 11:13:59.852438 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:14:00.349132 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:14:00.349167 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:00.349183 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:00.349196 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:00.352046 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:00.352076 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:00.352085 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:00.352092 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:00.352099 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:00.352105 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:00 GMT
	I0115 11:14:00.352111 1693723 round_trippers.go:580]     Audit-Id: 1b985caf-60e5-4a99-9694-eeab73d2455c
	I0115 11:14:00.352118 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:00.352275 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:14:00.849470 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:14:00.849500 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:00.849510 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:00.849517 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:00.852051 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:00.852076 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:00.852084 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:00.852092 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:00 GMT
	I0115 11:14:00.852098 1693723 round_trippers.go:580]     Audit-Id: 7a19d32d-8f4c-4686-b9e8-1b742efb82bc
	I0115 11:14:00.852104 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:00.852110 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:00.852116 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:00.852550 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:14:01.348930 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:14:01.348958 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:01.348968 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:01.348976 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:01.351718 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:01.351740 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:01.351748 1693723 round_trippers.go:580]     Audit-Id: d4b76087-0a1c-42b6-9bca-bed51e085a3b
	I0115 11:14:01.351754 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:01.351761 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:01.351767 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:01.351773 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:01.351779 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:01 GMT
	I0115 11:14:01.351900 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:14:01.848930 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:14:01.848955 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:01.848965 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:01.848973 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:01.851406 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:01.851427 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:01.851436 1693723 round_trippers.go:580]     Audit-Id: 9784aff8-b611-4f2a-8c51-c0646593592d
	I0115 11:14:01.851443 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:01.851449 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:01.851455 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:01.851461 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:01.851468 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:01 GMT
	I0115 11:14:01.851577 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:14:01.851949 1693723 node_ready.go:58] node "multinode-279658" has status "Ready":"False"
	I0115 11:14:02.349882 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:14:02.349906 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:02.349917 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:02.349924 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:02.352481 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:02.352507 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:02.352517 1693723 round_trippers.go:580]     Audit-Id: 94095fd2-08d2-4004-9751-ba155215136c
	I0115 11:14:02.352524 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:02.352534 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:02.352541 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:02.352547 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:02.352559 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:02 GMT
	I0115 11:14:02.352685 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:14:02.848898 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:14:02.848921 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:02.848930 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:02.848938 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:02.851515 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:02.851541 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:02.851550 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:02.851556 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:02.851563 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:02 GMT
	I0115 11:14:02.851570 1693723 round_trippers.go:580]     Audit-Id: ab889a42-789c-4a02-bbf3-83efb398dac4
	I0115 11:14:02.851576 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:02.851583 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:02.851693 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:14:03.349594 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:14:03.349615 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:03.349626 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:03.349633 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:03.352153 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:03.352177 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:03.352186 1693723 round_trippers.go:580]     Audit-Id: 8e34752b-10d1-4eef-98c9-cf384bfdc1c5
	I0115 11:14:03.352192 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:03.352199 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:03.352206 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:03.352212 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:03.352219 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:03 GMT
	I0115 11:14:03.352338 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:14:03.849548 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:14:03.849570 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:03.849579 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:03.849587 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:03.852023 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:03.852050 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:03.852059 1693723 round_trippers.go:580]     Audit-Id: f5fd3abe-b2b6-419e-8e44-1885e54e26fb
	I0115 11:14:03.852065 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:03.852072 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:03.852078 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:03.852085 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:03.852096 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:03 GMT
	I0115 11:14:03.852303 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:14:03.852698 1693723 node_ready.go:58] node "multinode-279658" has status "Ready":"False"
	I0115 11:14:04.349020 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:14:04.349041 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:04.349051 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:04.349059 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:04.351905 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:04.351929 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:04.351938 1693723 round_trippers.go:580]     Audit-Id: ff035cc0-f823-4561-965d-d91bc5a8b95b
	I0115 11:14:04.351944 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:04.351951 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:04.351957 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:04.351964 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:04.351970 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:04 GMT
	I0115 11:14:04.352187 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:14:04.849295 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:14:04.849319 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:04.849328 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:04.849336 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:04.851633 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:04.851651 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:04.851660 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:04.851668 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:04.851681 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:04.851688 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:04 GMT
	I0115 11:14:04.851694 1693723 round_trippers.go:580]     Audit-Id: b2eadf59-0295-4261-8ed0-d07af16ac418
	I0115 11:14:04.851700 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:04.852145 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:14:05.349483 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:14:05.349508 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:05.349518 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:05.349525 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:05.352224 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:05.352250 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:05.352258 1693723 round_trippers.go:580]     Audit-Id: b180d79d-37ab-42b9-b10f-782b50114ef0
	I0115 11:14:05.352264 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:05.352271 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:05.352277 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:05.352283 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:05.352290 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:05 GMT
	I0115 11:14:05.352760 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:14:05.848815 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:14:05.848854 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:05.848864 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:05.848871 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:05.851285 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:05.851305 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:05.851314 1693723 round_trippers.go:580]     Audit-Id: 74817822-02ac-4f22-9c7a-844cfa218102
	I0115 11:14:05.851321 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:05.851327 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:05.851333 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:05.851339 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:05.851345 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:05 GMT
	I0115 11:14:05.851508 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:14:06.349258 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:14:06.349285 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:06.349295 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:06.349303 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:06.351702 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:06.351726 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:06.351735 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:06.351743 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:06 GMT
	I0115 11:14:06.351749 1693723 round_trippers.go:580]     Audit-Id: 6c04b3e9-0c6d-454d-87c9-402d3bc66632
	I0115 11:14:06.351755 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:06.351761 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:06.351772 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:06.352043 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:14:06.352427 1693723 node_ready.go:58] node "multinode-279658" has status "Ready":"False"
	I0115 11:14:06.849610 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:14:06.849632 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:06.849642 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:06.849649 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:06.852176 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:06.852200 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:06.852208 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:06 GMT
	I0115 11:14:06.852214 1693723 round_trippers.go:580]     Audit-Id: dc8ae82a-4adb-48bd-88cd-7f65b4abab9a
	I0115 11:14:06.852220 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:06.852226 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:06.852233 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:06.852243 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:06.852403 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:14:07.349433 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:14:07.349462 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:07.349473 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:07.349481 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:07.352068 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:07.352093 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:07.352104 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:07.352111 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:07.352117 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:07.352123 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:07.352129 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:07 GMT
	I0115 11:14:07.352135 1693723 round_trippers.go:580]     Audit-Id: a8ba43de-1d35-4ff6-8d1f-8a1a0d641985
	I0115 11:14:07.352476 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:14:07.849140 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:14:07.849166 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:07.849177 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:07.849184 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:07.851585 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:07.851610 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:07.851619 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:07 GMT
	I0115 11:14:07.851626 1693723 round_trippers.go:580]     Audit-Id: 9aeb2955-28c1-4517-aa0e-1d6c367347c6
	I0115 11:14:07.851633 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:07.851639 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:07.851645 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:07.851654 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:07.851981 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:14:08.349032 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:14:08.349058 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:08.349069 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:08.349077 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:08.351578 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:08.351610 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:08.351619 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:08.351626 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:08.351632 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:08 GMT
	I0115 11:14:08.351638 1693723 round_trippers.go:580]     Audit-Id: 3a8ff9f7-0890-43ee-a73b-74a6e09fe5cd
	I0115 11:14:08.351645 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:08.351655 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:08.352048 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:14:08.849036 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:14:08.849062 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:08.849073 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:08.849080 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:08.851624 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:08.851651 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:08.851659 1693723 round_trippers.go:580]     Audit-Id: 41e62704-18f9-4030-a2fe-c74036726486
	I0115 11:14:08.851666 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:08.851672 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:08.851678 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:08.851684 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:08.851691 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:08 GMT
	I0115 11:14:08.851792 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:14:08.852176 1693723 node_ready.go:58] node "multinode-279658" has status "Ready":"False"
	I0115 11:14:09.349775 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:14:09.349801 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:09.349812 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:09.349820 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:09.352209 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:09.352232 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:09.352241 1693723 round_trippers.go:580]     Audit-Id: a1249437-8f49-4029-8ff4-177d5834d1c7
	I0115 11:14:09.352247 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:09.352253 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:09.352260 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:09.352270 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:09.352276 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:09 GMT
	I0115 11:14:09.352654 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:14:09.849252 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:14:09.849278 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:09.849288 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:09.849296 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:09.851809 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:09.851829 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:09.851837 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:09.851844 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:09 GMT
	I0115 11:14:09.851851 1693723 round_trippers.go:580]     Audit-Id: 8f9b9d4e-99d3-4c5c-927a-7e6f0caa8daf
	I0115 11:14:09.851857 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:09.851863 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:09.851869 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:09.852027 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:14:10.349059 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:14:10.349080 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:10.349091 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:10.349098 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:10.351661 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:10.351683 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:10.351691 1693723 round_trippers.go:580]     Audit-Id: 5c757dfa-c283-4161-b48d-46faa2ddf9fd
	I0115 11:14:10.351697 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:10.351704 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:10.351710 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:10.351716 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:10.351723 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:10 GMT
	I0115 11:14:10.351892 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:14:10.849666 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:14:10.849694 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:10.849705 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:10.849712 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:10.852195 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:10.852218 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:10.852227 1693723 round_trippers.go:580]     Audit-Id: d5aeaee8-f77f-453f-a778-ca9a78f93b76
	I0115 11:14:10.852235 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:10.852242 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:10.852248 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:10.852254 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:10.852265 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:10 GMT
	I0115 11:14:10.852381 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:14:10.852757 1693723 node_ready.go:58] node "multinode-279658" has status "Ready":"False"
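Each ~500ms iteration above is one pass of minikube's node-readiness wait (node_ready.go): fetch the Node object and check whether its Ready condition has turned True, retrying until it does or the wait times out. A rough sketch of such a polling loop against client-go follows; waitNodeReady is a hypothetical helper written for illustration, not the actual minikube code, and it assumes an already-configured clientset:

package nodewait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the API server every 500ms (the cadence visible
// in this log) until the named node reports Ready=True, the timeout
// elapses, or the context is cancelled.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("node %q did not become Ready within %s", name, timeout)
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond):
		}
	}
}

Until that condition flips, each poll produces another node "Ready":"False" status line of the kind seen throughout this section.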
	I0115 11:14:11.349585 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:14:11.349609 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:11.349619 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:11.349627 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:11.352124 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:11.352143 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:11.352152 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:11.352159 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:11 GMT
	I0115 11:14:11.352165 1693723 round_trippers.go:580]     Audit-Id: 48d1ff36-dec4-4f90-941d-5f92456c0e8e
	I0115 11:14:11.352171 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:11.352177 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:11.352184 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:11.352307 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:14:11.848927 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:14:11.848951 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:11.848962 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:11.848969 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:11.851494 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:11.851518 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:11.851528 1693723 round_trippers.go:580]     Audit-Id: 5df3de8d-52dd-47ef-985f-cb2cb8af23ad
	I0115 11:14:11.851534 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:11.851540 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:11.851546 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:11.851556 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:11.851563 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:11 GMT
	I0115 11:14:11.851725 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:14:12.349113 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:14:12.349156 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:12.349166 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:12.349173 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:12.351450 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:12.351473 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:12.351480 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:12.351487 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:12 GMT
	I0115 11:14:12.351493 1693723 round_trippers.go:580]     Audit-Id: 49ede737-a6db-48a5-b8a6-1fd4d1e10b3c
	I0115 11:14:12.351499 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:12.351505 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:12.351512 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:12.351828 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:14:12.849857 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:14:12.849883 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:12.849892 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:12.849900 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:12.852440 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:12.852465 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:12.852474 1693723 round_trippers.go:580]     Audit-Id: 300f9576-f6da-4d05-b2c9-83f40fb72702
	I0115 11:14:12.852480 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:12.852487 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:12.852492 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:12.852499 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:12.852510 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:12 GMT
	I0115 11:14:12.852648 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:14:12.853073 1693723 node_ready.go:58] node "multinode-279658" has status "Ready":"False"
	I0115 11:14:13.348909 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:14:13.348930 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:13.348939 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:13.348947 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:13.351414 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:13.351438 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:13.351448 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:13.351454 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:13.351461 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:13.351468 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:13.351474 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:13 GMT
	I0115 11:14:13.351485 1693723 round_trippers.go:580]     Audit-Id: a2d46783-0393-4e19-b0d3-1d3532444091
	I0115 11:14:13.351770 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"333","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0115 11:14:13.848857 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:14:13.848877 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:13.848887 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:13.848894 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:13.851584 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:13.851608 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:13.851617 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:13.851624 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:13.851630 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:13.851637 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:13 GMT
	I0115 11:14:13.851646 1693723 round_trippers.go:580]     Audit-Id: 03d356ed-904b-419f-a325-2ebf5252045a
	I0115 11:14:13.851657 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:13.880235 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"415","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0115 11:14:13.880623 1693723 node_ready.go:49] node "multinode-279658" has status "Ready":"True"
	I0115 11:14:13.880636 1693723 node_ready.go:38] duration metric: took 31.53195491s waiting for node "multinode-279658" to be "Ready" ...
	I0115 11:14:13.880646 1693723 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
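
The block above is the "node Ready" poll that fills most of this log: a GET of /api/v1/nodes/multinode-279658 roughly every 500ms, checking the node's Ready condition until it reports True (31.5s in this run). Below is a minimal client-go sketch of the same check; it assumes a kubeconfig at the default path, and the program structure is illustrative rather than minikube's actual node_ready.go code.

// nodeready.go: minimal sketch of the node-Ready poll logged above.
// Assumes a reachable cluster via $HOME/.kube/config; illustrative only.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	const node = "multinode-279658" // node name taken from the log above
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		n, err := client.CoreV1().Nodes().Get(context.TODO(), node, metav1.GetOptions{})
		if err == nil {
			for _, c := range n.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Printf("node %q is Ready\n", node)
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence above
	}
	fmt.Println("timed out waiting for node Ready")
}

The round_trippers lines in the log are klog tracing of each such GET at high verbosity; the check itself reduces to reading one condition off the node status.
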
	I0115 11:14:13.880724 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0115 11:14:13.880729 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:13.880738 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:13.880744 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:13.912646 1693723 round_trippers.go:574] Response Status: 200 OK in 31 milliseconds
	I0115 11:14:13.912668 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:13.912676 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:13.912683 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:13.912689 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:13.912696 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:13 GMT
	I0115 11:14:13.912702 1693723 round_trippers.go:580]     Audit-Id: 857906a2-5829-42fe-8f03-93b6bd60c7c1
	I0115 11:14:13.912708 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:13.915590 1693723 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"427"},"items":[{"metadata":{"name":"coredns-5dd5756b68-jqj8x","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"0bb83c0f-1bf1-4ade-94f6-8e46770f3371","resourceVersion":"420","creationTimestamp":"2024-01-15T11:13:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b3fe011c-57d0-4c2b-b5b4-50a12422361f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b3fe011c-57d0-4c2b-b5b4-50a12422361f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 62677 chars]
	I0115 11:14:13.919936 1693723 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-jqj8x" in "kube-system" namespace to be "Ready" ...
	I0115 11:14:13.920092 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-jqj8x
	I0115 11:14:13.920117 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:13.920155 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:13.920179 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:13.925731 1693723 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0115 11:14:13.925751 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:13.925759 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:13.925766 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:13 GMT
	I0115 11:14:13.925772 1693723 round_trippers.go:580]     Audit-Id: f8702522-db2c-4bed-a9cf-4cc139c8a9f4
	I0115 11:14:13.925778 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:13.925790 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:13.925796 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:13.927040 1693723 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-jqj8x","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"0bb83c0f-1bf1-4ade-94f6-8e46770f3371","resourceVersion":"420","creationTimestamp":"2024-01-15T11:13:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b3fe011c-57d0-4c2b-b5b4-50a12422361f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b3fe011c-57d0-4c2b-b5b4-50a12422361f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0115 11:14:13.927568 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:14:13.927578 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:13.927586 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:13.927593 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:13.930140 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:13.930156 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:13.930164 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:13.930171 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:13 GMT
	I0115 11:14:13.930177 1693723 round_trippers.go:580]     Audit-Id: 5d795db7-07e9-4961-bc15-264c48aa2541
	I0115 11:14:13.930183 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:13.930189 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:13.930195 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:13.930967 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"415","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0115 11:14:14.421120 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-jqj8x
	I0115 11:14:14.421145 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:14.421155 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:14.421163 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:14.423998 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:14.424025 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:14.424068 1693723 round_trippers.go:580]     Audit-Id: 4dff309e-653f-4a92-a749-77a46a857475
	I0115 11:14:14.424078 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:14.424085 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:14.424091 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:14.424097 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:14.424104 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:14 GMT
	I0115 11:14:14.424237 1693723 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-jqj8x","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"0bb83c0f-1bf1-4ade-94f6-8e46770f3371","resourceVersion":"437","creationTimestamp":"2024-01-15T11:13:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b3fe011c-57d0-4c2b-b5b4-50a12422361f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b3fe011c-57d0-4c2b-b5b4-50a12422361f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0115 11:14:14.424855 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:14:14.424871 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:14.424882 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:14.424889 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:14.427484 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:14.427502 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:14.427510 1693723 round_trippers.go:580]     Audit-Id: 7717bbb1-856d-44ba-b860-6807727f3bda
	I0115 11:14:14.427516 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:14.427522 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:14.427528 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:14.427535 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:14.427542 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:14 GMT
	I0115 11:14:14.427684 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"415","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0115 11:14:14.428052 1693723 pod_ready.go:92] pod "coredns-5dd5756b68-jqj8x" in "kube-system" namespace has status "Ready":"True"
	I0115 11:14:14.428064 1693723 pod_ready.go:81] duration metric: took 508.067574ms waiting for pod "coredns-5dd5756b68-jqj8x" in "kube-system" namespace to be "Ready" ...
	I0115 11:14:14.428074 1693723 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-rmgns" in "kube-system" namespace to be "Ready" ...
	I0115 11:14:14.428135 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rmgns
	I0115 11:14:14.428140 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:14.428147 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:14.428154 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:14.430409 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:14.430429 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:14.430437 1693723 round_trippers.go:580]     Audit-Id: 4d026681-eee3-4f28-a313-2b15f02c1d3d
	I0115 11:14:14.430443 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:14.430449 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:14.430455 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:14.430461 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:14.430468 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:14 GMT
	I0115 11:14:14.430568 1693723 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-rmgns","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"20120dfd-708f-4d25-a64a-d790f55c3e56","resourceVersion":"441","creationTimestamp":"2024-01-15T11:13:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b3fe011c-57d0-4c2b-b5b4-50a12422361f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b3fe011c-57d0-4c2b-b5b4-50a12422361f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0115 11:14:14.431081 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:14:14.431091 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:14.431099 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:14.431105 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:14.433352 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:14.433370 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:14.433378 1693723 round_trippers.go:580]     Audit-Id: 2b6f46e4-c9c1-479f-9de6-d52fe0577776
	I0115 11:14:14.433385 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:14.433391 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:14.433397 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:14.433406 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:14.433413 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:14 GMT
	I0115 11:14:14.433541 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"415","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0115 11:14:14.433946 1693723 pod_ready.go:92] pod "coredns-5dd5756b68-rmgns" in "kube-system" namespace has status "Ready":"True"
	I0115 11:14:14.433965 1693723 pod_ready.go:81] duration metric: took 5.883297ms waiting for pod "coredns-5dd5756b68-rmgns" in "kube-system" namespace to be "Ready" ...
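
Every per-pod wait above repeats one pattern: GET the pod, inspect its PodReady condition, then re-GET the node to confirm it is still Ready. The following is a compact client-go helper sketching the condition check; the name isPodReady is hypothetical, not minikube's.

// podready.go: sketch of the per-pod Ready check driving the waits above.
package readiness

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// isPodReady reports whether the named pod has condition Ready=True,
// mirroring the pod_ready.go log lines above (hypothetical helper).
func isPodReady(ctx context.Context, c kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}
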
	I0115 11:14:14.433976 1693723 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-279658" in "kube-system" namespace to be "Ready" ...
	I0115 11:14:14.434056 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-279658
	I0115 11:14:14.434066 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:14.434074 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:14.434081 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:14.436416 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:14.436472 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:14.436493 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:14.436516 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:14 GMT
	I0115 11:14:14.436552 1693723 round_trippers.go:580]     Audit-Id: 5b139d27-f281-4564-9065-c2c1993bd6e0
	I0115 11:14:14.436576 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:14.436589 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:14.436596 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:14.436719 1693723 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-279658","namespace":"kube-system","uid":"9aff8988-2d38-4d15-98cd-c3a9fa9bd280","resourceVersion":"325","creationTimestamp":"2024-01-15T11:13:29Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"f80ca0c251f41900e39544fa906af512","kubernetes.io/config.mirror":"f80ca0c251f41900e39544fa906af512","kubernetes.io/config.seen":"2024-01-15T11:13:29.009107303Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0115 11:14:14.437206 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:14:14.437223 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:14.437231 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:14.437239 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:14.439515 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:14.439537 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:14.439546 1693723 round_trippers.go:580]     Audit-Id: 243e1597-118a-4647-b0a6-2b489f91e631
	I0115 11:14:14.439552 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:14.439558 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:14.439564 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:14.439574 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:14.439587 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:14 GMT
	I0115 11:14:14.439741 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"415","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0115 11:14:14.440127 1693723 pod_ready.go:92] pod "etcd-multinode-279658" in "kube-system" namespace has status "Ready":"True"
	I0115 11:14:14.440146 1693723 pod_ready.go:81] duration metric: took 6.160205ms waiting for pod "etcd-multinode-279658" in "kube-system" namespace to be "Ready" ...
	I0115 11:14:14.440159 1693723 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-279658" in "kube-system" namespace to be "Ready" ...
	I0115 11:14:14.440221 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-279658
	I0115 11:14:14.440231 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:14.440238 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:14.440246 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:14.442607 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:14.442627 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:14.442635 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:14 GMT
	I0115 11:14:14.442642 1693723 round_trippers.go:580]     Audit-Id: 29352cbb-c2ba-4fec-b98b-aff04033f3e2
	I0115 11:14:14.442652 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:14.442663 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:14.442672 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:14.442678 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:14.442795 1693723 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-279658","namespace":"kube-system","uid":"693a03b4-3bdf-4de1-87cb-f4b6b524a7cf","resourceVersion":"321","creationTimestamp":"2024-01-15T11:13:29Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"e792308696bb4be3fddd132c9ec0f17b","kubernetes.io/config.mirror":"e792308696bb4be3fddd132c9ec0f17b","kubernetes.io/config.seen":"2024-01-15T11:13:29.009099369Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0115 11:14:14.443298 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:14:14.443315 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:14.443323 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:14.443330 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:14.445365 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:14.445382 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:14.445390 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:14.445396 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:14 GMT
	I0115 11:14:14.445402 1693723 round_trippers.go:580]     Audit-Id: d5aba186-a5d0-4f64-8a41-384580c549e5
	I0115 11:14:14.445408 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:14.445414 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:14.445420 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:14.445843 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"415","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0115 11:14:14.446364 1693723 pod_ready.go:92] pod "kube-apiserver-multinode-279658" in "kube-system" namespace has status "Ready":"True"
	I0115 11:14:14.446386 1693723 pod_ready.go:81] duration metric: took 6.217082ms waiting for pod "kube-apiserver-multinode-279658" in "kube-system" namespace to be "Ready" ...
	I0115 11:14:14.446410 1693723 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-279658" in "kube-system" namespace to be "Ready" ...
	I0115 11:14:14.449674 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-279658
	I0115 11:14:14.449690 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:14.449699 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:14.449707 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:14.452176 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:14.452208 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:14.452216 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:14.452223 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:14.452244 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:14.452255 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:14.452262 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:14 GMT
	I0115 11:14:14.452294 1693723 round_trippers.go:580]     Audit-Id: 684c2003-9116-4e6d-9972-ee21f05a8e10
	I0115 11:14:14.452446 1693723 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-279658","namespace":"kube-system","uid":"60d65709-5636-408d-8e80-491f1a4dfa1b","resourceVersion":"319","creationTimestamp":"2024-01-15T11:13:27Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"daeb02f972339046c9bf6a96a2b71156","kubernetes.io/config.mirror":"daeb02f972339046c9bf6a96a2b71156","kubernetes.io/config.seen":"2024-01-15T11:13:21.435611758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0115 11:14:14.649347 1693723 request.go:629] Waited for 196.330862ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:14:14.649413 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:14:14.649419 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:14.649428 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:14.649438 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:14.652004 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:14.652069 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:14.652086 1693723 round_trippers.go:580]     Audit-Id: 2d079147-1640-47c7-b06f-56af44f311a5
	I0115 11:14:14.652093 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:14.652102 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:14.652110 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:14.652119 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:14.652139 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:14 GMT
	I0115 11:14:14.652321 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"415","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0115 11:14:14.652719 1693723 pod_ready.go:92] pod "kube-controller-manager-multinode-279658" in "kube-system" namespace has status "Ready":"True"
	I0115 11:14:14.652739 1693723 pod_ready.go:81] duration metric: took 206.315817ms waiting for pod "kube-controller-manager-multinode-279658" in "kube-system" namespace to be "Ready" ...
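
The "Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go's own token-bucket rate limiter, not from the API server: with an unset rest.Config the client defaults to roughly 5 requests/second with a burst of 10, so the rapid pod/node GET pairs here queue for about 200ms each. A sketch of raising those limits when building the config follows; the values are illustrative.

// throttle.go: sketch showing where the client-side throttling above is
// configured. QPS/Burst left at zero mean client-go's defaults (5/10).
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // average requests per second allowed by the token bucket
	cfg.Burst = 100 // short bursts above the average
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		panic(err)
	}
}
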
	I0115 11:14:14.652751 1693723 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tdtxr" in "kube-system" namespace to be "Ready" ...
	I0115 11:14:14.849037 1693723 request.go:629] Waited for 196.219809ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tdtxr
	I0115 11:14:14.849099 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tdtxr
	I0115 11:14:14.849110 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:14.849120 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:14.849131 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:14.851724 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:14.851788 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:14.851811 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:14 GMT
	I0115 11:14:14.851832 1693723 round_trippers.go:580]     Audit-Id: 0cf931ea-f045-4696-a590-b0611eaad9cb
	I0115 11:14:14.851860 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:14.851869 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:14.851875 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:14.851894 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:14.852042 1693723 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tdtxr","generateName":"kube-proxy-","namespace":"kube-system","uid":"fd50a58b-d9c8-42ae-8a1a-d4716cedb568","resourceVersion":"391","creationTimestamp":"2024-01-15T11:13:41Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a9ab3f8d-ea08-4f6a-92bb-976b14e41e6f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a9ab3f8d-ea08-4f6a-92bb-976b14e41e6f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0115 11:14:15.049863 1693723 request.go:629] Waited for 197.339157ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:14:15.049930 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:14:15.049938 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:15.049955 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:15.049963 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:15.053197 1693723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 11:14:15.053295 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:15.053313 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:15 GMT
	I0115 11:14:15.053321 1693723 round_trippers.go:580]     Audit-Id: 2185be1b-da38-42c5-86b7-533f469df79c
	I0115 11:14:15.053328 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:15.053334 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:15.053341 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:15.053347 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:15.053462 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"415","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0115 11:14:15.053858 1693723 pod_ready.go:92] pod "kube-proxy-tdtxr" in "kube-system" namespace has status "Ready":"True"
	I0115 11:14:15.053878 1693723 pod_ready.go:81] duration metric: took 401.117109ms waiting for pod "kube-proxy-tdtxr" in "kube-system" namespace to be "Ready" ...
	I0115 11:14:15.053891 1693723 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-279658" in "kube-system" namespace to be "Ready" ...
	I0115 11:14:15.249654 1693723 request.go:629] Waited for 195.694481ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-279658
	I0115 11:14:15.249718 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-279658
	I0115 11:14:15.249728 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:15.249738 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:15.249749 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:15.253106 1693723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 11:14:15.253143 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:15.253152 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:15.253158 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:15.253164 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:15.253170 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:15.253176 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:15 GMT
	I0115 11:14:15.253183 1693723 round_trippers.go:580]     Audit-Id: 9621b339-8ccc-4ff3-ac9c-74927a29eb61
	I0115 11:14:15.253438 1693723 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-279658","namespace":"kube-system","uid":"bfd1e34e-c84e-4102-84a5-c1c5e50447d4","resourceVersion":"320","creationTimestamp":"2024-01-15T11:13:27Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0a1ae7a442119331fb27f0b43446d749","kubernetes.io/config.mirror":"0a1ae7a442119331fb27f0b43446d749","kubernetes.io/config.seen":"2024-01-15T11:13:21.435601494Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0115 11:14:15.449191 1693723 request.go:629] Waited for 195.272969ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:14:15.449254 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:14:15.449259 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:15.449269 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:15.449277 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:15.451785 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:15.451843 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:15.451866 1693723 round_trippers.go:580]     Audit-Id: 72466587-7e73-4723-9f41-295119e708e5
	I0115 11:14:15.451887 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:15.451921 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:15.451944 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:15.451965 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:15.451985 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:15 GMT
	I0115 11:14:15.452097 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"415","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0115 11:14:15.452493 1693723 pod_ready.go:92] pod "kube-scheduler-multinode-279658" in "kube-system" namespace has status "Ready":"True"
	I0115 11:14:15.452512 1693723 pod_ready.go:81] duration metric: took 398.61058ms waiting for pod "kube-scheduler-multinode-279658" in "kube-system" namespace to be "Ready" ...
	I0115 11:14:15.452526 1693723 pod_ready.go:38] duration metric: took 1.571858806s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
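
	The pod_ready checks above boil down to reading the PodReady condition off each pod's status. A minimal client-go sketch of that check, assuming an illustrative kubeconfig path (the real paths in this run live under the Jenkins workspace):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path is an assumption for the sketch.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-scheduler-multinode-279658", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				fmt.Printf("Ready=%s\n", c.Status) // "True", as logged above
			}
		}
	}
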
	I0115 11:14:15.452544 1693723 api_server.go:52] waiting for apiserver process to appear ...
	I0115 11:14:15.452606 1693723 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 11:14:15.464157 1693723 command_runner.go:130] > 1261
	I0115 11:14:15.465490 1693723 api_server.go:72] duration metric: took 33.224599139s to wait for apiserver process to appear ...
	I0115 11:14:15.465512 1693723 api_server.go:88] waiting for apiserver healthz status ...
	I0115 11:14:15.465530 1693723 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0115 11:14:15.475000 1693723 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
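
	The healthz poll above is just an HTTPS GET that expects the literal body "ok". A minimal sketch; certificate verification is disabled here purely to keep it short, whereas the real check uses the cluster's credentials:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.58.2:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
	}
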
	I0115 11:14:15.475072 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I0115 11:14:15.475084 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:15.475094 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:15.475101 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:15.476227 1693723 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0115 11:14:15.476246 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:15.476254 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:15.476261 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:15.476267 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:15.476276 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:15.476284 1693723 round_trippers.go:580]     Content-Length: 264
	I0115 11:14:15.476290 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:15 GMT
	I0115 11:14:15.476297 1693723 round_trippers.go:580]     Audit-Id: 614d0078-b633-4d92-b452-a3f145d64387
	I0115 11:14:15.476315 1693723 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/arm64"
	}
	I0115 11:14:15.476395 1693723 api_server.go:141] control plane version: v1.28.4
	I0115 11:14:15.476409 1693723 api_server.go:131] duration metric: took 10.89072ms to wait for apiserver health ...
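
	The /version payload above deserializes cleanly into a struct like apimachinery's version.Info, which is one way to recover the GitVersion that the log reports as the control plane version:

	package main

	import (
		"encoding/json"
		"fmt"

		"k8s.io/apimachinery/pkg/version"
	)

	func main() {
		// Trimmed copy of the response body logged above.
		payload := []byte(`{"major":"1","minor":"28","gitVersion":"v1.28.4","platform":"linux/arm64"}`)
		var info version.Info
		if err := json.Unmarshal(payload, &info); err != nil {
			panic(err)
		}
		fmt.Println("control plane version:", info.GitVersion) // v1.28.4
	}
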
	I0115 11:14:15.476416 1693723 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 11:14:15.649791 1693723 request.go:629] Waited for 173.315501ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0115 11:14:15.649894 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0115 11:14:15.649922 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:15.649937 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:15.649945 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:15.653826 1693723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 11:14:15.653848 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:15.653857 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:15.653875 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:15.653882 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:15.653889 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:15.653896 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:15 GMT
	I0115 11:14:15.653902 1693723 round_trippers.go:580]     Audit-Id: 3141ee58-dbf7-4c2f-96fa-5eca088a90c0
	I0115 11:14:15.654694 1693723 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"446"},"items":[{"metadata":{"name":"coredns-5dd5756b68-jqj8x","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"0bb83c0f-1bf1-4ade-94f6-8e46770f3371","resourceVersion":"437","creationTimestamp":"2024-01-15T11:13:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b3fe011c-57d0-4c2b-b5b4-50a12422361f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b3fe011c-57d0-4c2b-b5b4-50a12422361f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 62868 chars]
	I0115 11:14:15.657306 1693723 system_pods.go:59] 9 kube-system pods found
	I0115 11:14:15.657338 1693723 system_pods.go:61] "coredns-5dd5756b68-jqj8x" [0bb83c0f-1bf1-4ade-94f6-8e46770f3371] Running
	I0115 11:14:15.657347 1693723 system_pods.go:61] "coredns-5dd5756b68-rmgns" [20120dfd-708f-4d25-a64a-d790f55c3e56] Running
	I0115 11:14:15.657362 1693723 system_pods.go:61] "etcd-multinode-279658" [9aff8988-2d38-4d15-98cd-c3a9fa9bd280] Running
	I0115 11:14:15.657374 1693723 system_pods.go:61] "kindnet-ngs6h" [f169abe5-8939-4bcc-ab7d-b4bbe74029e0] Running
	I0115 11:14:15.657380 1693723 system_pods.go:61] "kube-apiserver-multinode-279658" [693a03b4-3bdf-4de1-87cb-f4b6b524a7cf] Running
	I0115 11:14:15.657386 1693723 system_pods.go:61] "kube-controller-manager-multinode-279658" [60d65709-5636-408d-8e80-491f1a4dfa1b] Running
	I0115 11:14:15.657394 1693723 system_pods.go:61] "kube-proxy-tdtxr" [fd50a58b-d9c8-42ae-8a1a-d4716cedb568] Running
	I0115 11:14:15.657399 1693723 system_pods.go:61] "kube-scheduler-multinode-279658" [bfd1e34e-c84e-4102-84a5-c1c5e50447d4] Running
	I0115 11:14:15.657404 1693723 system_pods.go:61] "storage-provisioner" [734e2efb-4fca-4aec-ba3d-882668c1ced5] Running
	I0115 11:14:15.657410 1693723 system_pods.go:74] duration metric: took 180.989285ms to wait for pod list to return data ...
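
	The recurring "Waited ... due to client-side throttling" lines come from client-go's default rate limiter (5 QPS, burst 10) pacing these GETs; as the message itself notes, this is not the server's priority-and-fairness machinery. If a polling loop like this needed to go faster, the limits live on the rest.Config. The numbers below are illustrative, not minikube's settings:

	package main

	import (
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func newFasterClient(kubeconfig string) (*kubernetes.Clientset, error) {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return nil, err
		}
		cfg.QPS = 50    // client-go default is 5 requests/sec
		cfg.Burst = 100 // client-go default is 10
		return kubernetes.NewForConfig(cfg)
	}

	func main() {
		cs, err := newFasterClient("/home/jenkins/.kube/config") // path is an assumption
		if err != nil {
			panic(err)
		}
		fmt.Printf("%T ready\n", cs)
	}
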
	I0115 11:14:15.657421 1693723 default_sa.go:34] waiting for default service account to be created ...
	I0115 11:14:15.849826 1693723 request.go:629] Waited for 192.313889ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0115 11:14:15.849893 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0115 11:14:15.849903 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:15.849912 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:15.849922 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:15.852445 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:15.852473 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:15.852484 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:15.852491 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:15.852498 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:15.852525 1693723 round_trippers.go:580]     Content-Length: 261
	I0115 11:14:15.852532 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:15 GMT
	I0115 11:14:15.852542 1693723 round_trippers.go:580]     Audit-Id: ec21c9b1-312f-4b3b-8c7b-75786c61a601
	I0115 11:14:15.852549 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:15.852599 1693723 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"446"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"de3603fc-b50f-4a73-ab93-0a5aa22f4317","resourceVersion":"335","creationTimestamp":"2024-01-15T11:13:41Z"}}]}
	I0115 11:14:15.852818 1693723 default_sa.go:45] found service account: "default"
	I0115 11:14:15.852839 1693723 default_sa.go:55] duration metric: took 195.411312ms for default service account to be created ...
	I0115 11:14:15.852854 1693723 system_pods.go:116] waiting for k8s-apps to be running ...
	I0115 11:14:16.049280 1693723 request.go:629] Waited for 196.36278ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0115 11:14:16.049427 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0115 11:14:16.049441 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:16.049451 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:16.049469 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:16.053270 1693723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 11:14:16.053347 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:16.053369 1693723 round_trippers.go:580]     Audit-Id: c5d90d9b-60af-457a-b797-deea9fe086df
	I0115 11:14:16.053392 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:16.053425 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:16.053451 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:16.053471 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:16.053501 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:16 GMT
	I0115 11:14:16.054126 1693723 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"446"},"items":[{"metadata":{"name":"coredns-5dd5756b68-jqj8x","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"0bb83c0f-1bf1-4ade-94f6-8e46770f3371","resourceVersion":"437","creationTimestamp":"2024-01-15T11:13:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b3fe011c-57d0-4c2b-b5b4-50a12422361f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b3fe011c-57d0-4c2b-b5b4-50a12422361f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 62868 chars]
	I0115 11:14:16.056737 1693723 system_pods.go:86] 9 kube-system pods found
	I0115 11:14:16.056768 1693723 system_pods.go:89] "coredns-5dd5756b68-jqj8x" [0bb83c0f-1bf1-4ade-94f6-8e46770f3371] Running
	I0115 11:14:16.056776 1693723 system_pods.go:89] "coredns-5dd5756b68-rmgns" [20120dfd-708f-4d25-a64a-d790f55c3e56] Running
	I0115 11:14:16.056782 1693723 system_pods.go:89] "etcd-multinode-279658" [9aff8988-2d38-4d15-98cd-c3a9fa9bd280] Running
	I0115 11:14:16.056787 1693723 system_pods.go:89] "kindnet-ngs6h" [f169abe5-8939-4bcc-ab7d-b4bbe74029e0] Running
	I0115 11:14:16.056792 1693723 system_pods.go:89] "kube-apiserver-multinode-279658" [693a03b4-3bdf-4de1-87cb-f4b6b524a7cf] Running
	I0115 11:14:16.056798 1693723 system_pods.go:89] "kube-controller-manager-multinode-279658" [60d65709-5636-408d-8e80-491f1a4dfa1b] Running
	I0115 11:14:16.056802 1693723 system_pods.go:89] "kube-proxy-tdtxr" [fd50a58b-d9c8-42ae-8a1a-d4716cedb568] Running
	I0115 11:14:16.056808 1693723 system_pods.go:89] "kube-scheduler-multinode-279658" [bfd1e34e-c84e-4102-84a5-c1c5e50447d4] Running
	I0115 11:14:16.056817 1693723 system_pods.go:89] "storage-provisioner" [734e2efb-4fca-4aec-ba3d-882668c1ced5] Running
	I0115 11:14:16.056825 1693723 system_pods.go:126] duration metric: took 203.963334ms to wait for k8s-apps to be running ...
	I0115 11:14:16.056838 1693723 system_svc.go:44] waiting for kubelet service to be running ....
	I0115 11:14:16.056904 1693723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 11:14:16.071358 1693723 system_svc.go:56] duration metric: took 14.509073ms WaitForService to wait for kubelet.
	I0115 11:14:16.071423 1693723 kubeadm.go:581] duration metric: took 33.83053634s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
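
	The kubelet check above leans on systemctl's exit code rather than parsing output: `is-active --quiet` exits 0 only when the unit is active. A sketch of the same probe:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// unitActive reports whether a systemd unit is active; --quiet suppresses
	// stdout, so the answer is carried entirely by the exit status.
	func unitActive(unit string) bool {
		return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
	}

	func main() {
		fmt.Println("kubelet active:", unitActive("kubelet"))
	}
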
	I0115 11:14:16.071450 1693723 node_conditions.go:102] verifying NodePressure condition ...
	I0115 11:14:16.249844 1693723 request.go:629] Waited for 178.324884ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0115 11:14:16.249921 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0115 11:14:16.249930 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:16.249940 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:16.249950 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:16.252577 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:16.252601 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:16.252610 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:16.252617 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:16.252626 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:16 GMT
	I0115 11:14:16.252639 1693723 round_trippers.go:580]     Audit-Id: ac8bd651-368e-4670-a8ea-b75930ce2455
	I0115 11:14:16.252646 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:16.252652 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:16.252910 1693723 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"447"},"items":[{"metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"415","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6082 chars]
	I0115 11:14:16.253395 1693723 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0115 11:14:16.253421 1693723 node_conditions.go:123] node cpu capacity is 2
	I0115 11:14:16.253433 1693723 node_conditions.go:105] duration metric: took 181.977887ms to run NodePressure ...
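
	The NodePressure verification reads capacity straight off each Node's status, which is where the "203034800Ki" and "cpu capacity is 2" figures above come from. A client-go sketch of the same read (kubeconfig path assumed):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity.Cpu()                            // e.g. 2
			eph := n.Status.Capacity[corev1.ResourceEphemeralStorage] // e.g. 203034800Ki
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu, eph.String())
		}
	}
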
	I0115 11:14:16.253444 1693723 start.go:228] waiting for startup goroutines ...
	I0115 11:14:16.253451 1693723 start.go:233] waiting for cluster config update ...
	I0115 11:14:16.253469 1693723 start.go:242] writing updated cluster config ...
	I0115 11:14:16.256500 1693723 out.go:177] 
	I0115 11:14:16.258902 1693723 config.go:182] Loaded profile config "multinode-279658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 11:14:16.258991 1693723 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/multinode-279658/config.json ...
	I0115 11:14:16.261532 1693723 out.go:177] * Starting worker node multinode-279658-m02 in cluster multinode-279658
	I0115 11:14:16.264150 1693723 cache.go:121] Beginning downloading kic base image for docker with crio
	I0115 11:14:16.266030 1693723 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0115 11:14:16.267737 1693723 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0115 11:14:16.267769 1693723 cache.go:56] Caching tarball of preloaded images
	I0115 11:14:16.267812 1693723 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0115 11:14:16.267884 1693723 preload.go:174] Found /home/jenkins/minikube-integration/17953-1625104/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0115 11:14:16.267901 1693723 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0115 11:14:16.268022 1693723 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/multinode-279658/config.json ...
	I0115 11:14:16.284952 1693723 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0115 11:14:16.284974 1693723 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0115 11:14:16.284997 1693723 cache.go:194] Successfully downloaded all kic artifacts
	I0115 11:14:16.285031 1693723 start.go:365] acquiring machines lock for multinode-279658-m02: {Name:mk470ac652d00ff2554fe65ed8790bffd8e88d45 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 11:14:16.285153 1693723 start.go:369] acquired machines lock for "multinode-279658-m02" in 104.187µs
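
	The machines lock keeps parallel provisioning steps from racing on the same machine; the Name/Delay/Timeout fields above are the spec of a named mutex. A generic sketch of the idea using a POSIX flock on a lock file (not minikube's actual implementation):

	package main

	import (
		"fmt"
		"os"
		"syscall"
	)

	func withLock(path string, fn func()) error {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o644)
		if err != nil {
			return err
		}
		defer f.Close()
		// LOCK_EX blocks until the exclusive lock is held.
		if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
			return err
		}
		defer syscall.Flock(int(f.Fd()), syscall.LOCK_UN)
		fn()
		return nil
	}

	func main() {
		_ = withLock("/tmp/multinode-279658-m02.lock", func() {
			fmt.Println("provisioning m02...")
		})
	}
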
	I0115 11:14:16.285178 1693723 start.go:93] Provisioning new machine with config: &{Name:multinode-279658 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-279658 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0115 11:14:16.285261 1693723 start.go:125] createHost starting for "m02" (driver="docker")
	I0115 11:14:16.287627 1693723 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0115 11:14:16.287738 1693723 start.go:159] libmachine.API.Create for "multinode-279658" (driver="docker")
	I0115 11:14:16.287763 1693723 client.go:168] LocalClient.Create starting
	I0115 11:14:16.287822 1693723 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/ca.pem
	I0115 11:14:16.287863 1693723 main.go:141] libmachine: Decoding PEM data...
	I0115 11:14:16.287881 1693723 main.go:141] libmachine: Parsing certificate...
	I0115 11:14:16.287989 1693723 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/cert.pem
	I0115 11:14:16.288018 1693723 main.go:141] libmachine: Decoding PEM data...
	I0115 11:14:16.288037 1693723 main.go:141] libmachine: Parsing certificate...
	I0115 11:14:16.288270 1693723 cli_runner.go:164] Run: docker network inspect multinode-279658 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0115 11:14:16.305802 1693723 network_create.go:77] Found existing network {name:multinode-279658 subnet:0x40036f0f00 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I0115 11:14:16.305847 1693723 kic.go:121] calculated static IP "192.168.58.3" for the "multinode-279658-m02" container
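
	The static IP comes from simple offset arithmetic on the existing network: the gateway holds .1, the control plane .2, so the second node gets .3. A sketch of that calculation with net/netip:

	package main

	import (
		"fmt"
		"net/netip"
	)

	// nthHost walks n addresses past the gateway: gateway 192.168.58.1 with
	// n=2 yields 192.168.58.3, matching the log line above.
	func nthHost(gateway netip.Addr, n int) netip.Addr {
		a := gateway
		for i := 0; i < n; i++ {
			a = a.Next()
		}
		return a
	}

	func main() {
		gw := netip.MustParseAddr("192.168.58.1")
		fmt.Println(nthHost(gw, 2)) // 192.168.58.3
	}
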
	I0115 11:14:16.305931 1693723 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0115 11:14:16.328182 1693723 cli_runner.go:164] Run: docker volume create multinode-279658-m02 --label name.minikube.sigs.k8s.io=multinode-279658-m02 --label created_by.minikube.sigs.k8s.io=true
	I0115 11:14:16.346332 1693723 oci.go:103] Successfully created a docker volume multinode-279658-m02
	I0115 11:14:16.346440 1693723 cli_runner.go:164] Run: docker run --rm --name multinode-279658-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-279658-m02 --entrypoint /usr/bin/test -v multinode-279658-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0115 11:14:16.873529 1693723 oci.go:107] Successfully prepared a docker volume multinode-279658-m02
	I0115 11:14:16.873569 1693723 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0115 11:14:16.873592 1693723 kic.go:194] Starting extracting preloaded images to volume ...
	I0115 11:14:16.873688 1693723 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17953-1625104/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-279658-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0115 11:14:21.151946 1693723 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17953-1625104/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-279658-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.278210623s)
	I0115 11:14:21.151979 1693723 kic.go:203] duration metric: took 4.278385 seconds to extract preloaded images to volume
	W0115 11:14:21.152120 1693723 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0115 11:14:21.152231 1693723 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0115 11:14:21.225429 1693723 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-279658-m02 --name multinode-279658-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-279658-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-279658-m02 --network multinode-279658 --ip 192.168.58.3 --volume multinode-279658-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0
	I0115 11:14:21.575148 1693723 cli_runner.go:164] Run: docker container inspect multinode-279658-m02 --format={{.State.Running}}
	I0115 11:14:21.609169 1693723 cli_runner.go:164] Run: docker container inspect multinode-279658-m02 --format={{.State.Status}}
	I0115 11:14:21.636006 1693723 cli_runner.go:164] Run: docker exec multinode-279658-m02 stat /var/lib/dpkg/alternatives/iptables
	I0115 11:14:21.699472 1693723 oci.go:144] the created container "multinode-279658-m02" has a running status.
	I0115 11:14:21.699503 1693723 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17953-1625104/.minikube/machines/multinode-279658-m02/id_rsa...
	I0115 11:14:22.894475 1693723 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-1625104/.minikube/machines/multinode-279658-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0115 11:14:22.894526 1693723 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17953-1625104/.minikube/machines/multinode-279658-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0115 11:14:22.918963 1693723 cli_runner.go:164] Run: docker container inspect multinode-279658-m02 --format={{.State.Status}}
	I0115 11:14:22.941813 1693723 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0115 11:14:22.941835 1693723 kic_runner.go:114] Args: [docker exec --privileged multinode-279658-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0115 11:14:23.027933 1693723 cli_runner.go:164] Run: docker container inspect multinode-279658-m02 --format={{.State.Status}}
	I0115 11:14:23.052531 1693723 machine.go:88] provisioning docker machine ...
	I0115 11:14:23.052567 1693723 ubuntu.go:169] provisioning hostname "multinode-279658-m02"
	I0115 11:14:23.052636 1693723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-279658-m02
	I0115 11:14:23.073022 1693723 main.go:141] libmachine: Using SSH client type: native
	I0115 11:14:23.073447 1693723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfbd0] 0x3c2340 <nil>  [] 0s} 127.0.0.1 34799 <nil> <nil>}
	I0115 11:14:23.073500 1693723 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-279658-m02 && echo "multinode-279658-m02" | sudo tee /etc/hostname
	I0115 11:14:23.225882 1693723 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-279658-m02
	
	I0115 11:14:23.226048 1693723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-279658-m02
	I0115 11:14:23.244462 1693723 main.go:141] libmachine: Using SSH client type: native
	I0115 11:14:23.244868 1693723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfbd0] 0x3c2340 <nil>  [] 0s} 127.0.0.1 34799 <nil> <nil>}
	I0115 11:14:23.244887 1693723 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-279658-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-279658-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-279658-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0115 11:14:23.383748 1693723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0115 11:14:23.383778 1693723 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17953-1625104/.minikube CaCertPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17953-1625104/.minikube}
	I0115 11:14:23.383795 1693723 ubuntu.go:177] setting up certificates
	I0115 11:14:23.383805 1693723 provision.go:83] configureAuth start
	I0115 11:14:23.383867 1693723 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-279658-m02
	I0115 11:14:23.401462 1693723 provision.go:138] copyHostCerts
	I0115 11:14:23.401504 1693723 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17953-1625104/.minikube/ca.pem
	I0115 11:14:23.401535 1693723 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-1625104/.minikube/ca.pem, removing ...
	I0115 11:14:23.401546 1693723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-1625104/.minikube/ca.pem
	I0115 11:14:23.401618 1693723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17953-1625104/.minikube/ca.pem (1082 bytes)
	I0115 11:14:23.401698 1693723 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17953-1625104/.minikube/cert.pem
	I0115 11:14:23.401718 1693723 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-1625104/.minikube/cert.pem, removing ...
	I0115 11:14:23.401723 1693723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-1625104/.minikube/cert.pem
	I0115 11:14:23.401747 1693723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17953-1625104/.minikube/cert.pem (1123 bytes)
	I0115 11:14:23.401834 1693723 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17953-1625104/.minikube/key.pem
	I0115 11:14:23.401860 1693723 exec_runner.go:144] found /home/jenkins/minikube-integration/17953-1625104/.minikube/key.pem, removing ...
	I0115 11:14:23.401868 1693723 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17953-1625104/.minikube/key.pem
	I0115 11:14:23.401903 1693723 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17953-1625104/.minikube/key.pem (1675 bytes)
	I0115 11:14:23.401954 1693723 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17953-1625104/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17953-1625104/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17953-1625104/.minikube/certs/ca-key.pem org=jenkins.multinode-279658-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-279658-m02]
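
	The server cert generated here has to carry every name and address in the logged SAN list, or TLS connections to the new node would fail hostname verification. A self-contained crypto/x509 sketch of issuing such a cert; it self-signs only to stay short, whereas minikube signs with the ca.pem/ca-key.pem pair named above:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-279658-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
			DNSNames:     []string{"localhost", "minikube", "multinode-279658-m02"},
			IPAddresses:  []net.IP{net.ParseIP("192.168.58.3"), net.ParseIP("127.0.0.1")},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		// Self-signed here (tmpl signs tmpl) purely to keep the sketch compact.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
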
	I0115 11:14:24.273491 1693723 provision.go:172] copyRemoteCerts
	I0115 11:14:24.273558 1693723 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0115 11:14:24.273604 1693723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-279658-m02
	I0115 11:14:24.291130 1693723 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34799 SSHKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/multinode-279658-m02/id_rsa Username:docker}
	I0115 11:14:24.389251 1693723 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0115 11:14:24.389309 1693723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0115 11:14:24.417317 1693723 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-1625104/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0115 11:14:24.417379 1693723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-1625104/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0115 11:14:24.446196 1693723 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-1625104/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0115 11:14:24.446259 1693723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-1625104/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0115 11:14:24.475979 1693723 provision.go:86] duration metric: configureAuth took 1.092160657s
	I0115 11:14:24.476005 1693723 ubuntu.go:193] setting minikube options for container-runtime
	I0115 11:14:24.476191 1693723 config.go:182] Loaded profile config "multinode-279658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 11:14:24.476302 1693723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-279658-m02
	I0115 11:14:24.493852 1693723 main.go:141] libmachine: Using SSH client type: native
	I0115 11:14:24.494273 1693723 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfbd0] 0x3c2340 <nil>  [] 0s} 127.0.0.1 34799 <nil> <nil>}
	I0115 11:14:24.494318 1693723 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0115 11:14:24.755315 1693723 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0115 11:14:24.755342 1693723 machine.go:91] provisioned docker machine in 1.702786133s
	I0115 11:14:24.755353 1693723 client.go:171] LocalClient.Create took 8.467583659s
	I0115 11:14:24.755388 1693723 start.go:167] duration metric: libmachine.API.Create for "multinode-279658" took 8.467648649s
	I0115 11:14:24.755402 1693723 start.go:300] post-start starting for "multinode-279658-m02" (driver="docker")
	I0115 11:14:24.755413 1693723 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0115 11:14:24.755496 1693723 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0115 11:14:24.755562 1693723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-279658-m02
	I0115 11:14:24.776211 1693723 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34799 SSHKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/multinode-279658-m02/id_rsa Username:docker}
	I0115 11:14:24.877187 1693723 ssh_runner.go:195] Run: cat /etc/os-release
	I0115 11:14:24.881093 1693723 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I0115 11:14:24.881110 1693723 command_runner.go:130] > NAME="Ubuntu"
	I0115 11:14:24.881118 1693723 command_runner.go:130] > VERSION_ID="22.04"
	I0115 11:14:24.881124 1693723 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I0115 11:14:24.881130 1693723 command_runner.go:130] > VERSION_CODENAME=jammy
	I0115 11:14:24.881135 1693723 command_runner.go:130] > ID=ubuntu
	I0115 11:14:24.881139 1693723 command_runner.go:130] > ID_LIKE=debian
	I0115 11:14:24.881145 1693723 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0115 11:14:24.881152 1693723 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0115 11:14:24.881159 1693723 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0115 11:14:24.881170 1693723 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0115 11:14:24.881175 1693723 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0115 11:14:24.881274 1693723 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0115 11:14:24.881309 1693723 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0115 11:14:24.881320 1693723 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0115 11:14:24.881330 1693723 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0115 11:14:24.881341 1693723 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-1625104/.minikube/addons for local assets ...
	I0115 11:14:24.881400 1693723 filesync.go:126] Scanning /home/jenkins/minikube-integration/17953-1625104/.minikube/files for local assets ...
	I0115 11:14:24.881480 1693723 filesync.go:149] local asset: /home/jenkins/minikube-integration/17953-1625104/.minikube/files/etc/ssl/certs/16304352.pem -> 16304352.pem in /etc/ssl/certs
	I0115 11:14:24.881490 1693723 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-1625104/.minikube/files/etc/ssl/certs/16304352.pem -> /etc/ssl/certs/16304352.pem
	I0115 11:14:24.881616 1693723 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0115 11:14:24.891865 1693723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-1625104/.minikube/files/etc/ssl/certs/16304352.pem --> /etc/ssl/certs/16304352.pem (1708 bytes)
	I0115 11:14:24.920759 1693723 start.go:303] post-start completed in 165.342459ms
	I0115 11:14:24.921115 1693723 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-279658-m02
	I0115 11:14:24.939407 1693723 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/multinode-279658/config.json ...
	I0115 11:14:24.939681 1693723 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 11:14:24.939732 1693723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-279658-m02
	I0115 11:14:24.962189 1693723 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34799 SSHKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/multinode-279658-m02/id_rsa Username:docker}
	I0115 11:14:25.072797 1693723 command_runner.go:130] > 12%!
	(MISSING)I0115 11:14:25.072889 1693723 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0115 11:14:25.078810 1693723 command_runner.go:130] > 171G
	I0115 11:14:25.079082 1693723 start.go:128] duration metric: createHost completed in 8.793809571s
	I0115 11:14:25.079100 1693723 start.go:83] releasing machines lock for "multinode-279658-m02", held for 8.793938536s
	I0115 11:14:25.079180 1693723 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-279658-m02
	I0115 11:14:25.100999 1693723 out.go:177] * Found network options:
	I0115 11:14:25.102736 1693723 out.go:177]   - NO_PROXY=192.168.58.2
	W0115 11:14:25.104483 1693723 proxy.go:119] fail to check proxy env: Error ip not in block
	W0115 11:14:25.104533 1693723 proxy.go:119] fail to check proxy env: Error ip not in block
	I0115 11:14:25.104613 1693723 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0115 11:14:25.104661 1693723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-279658-m02
	I0115 11:14:25.105010 1693723 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0115 11:14:25.105073 1693723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-279658-m02
	I0115 11:14:25.127244 1693723 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34799 SSHKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/multinode-279658-m02/id_rsa Username:docker}
	I0115 11:14:25.128859 1693723 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34799 SSHKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/multinode-279658-m02/id_rsa Username:docker}
	I0115 11:14:25.380360 1693723 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0115 11:14:25.393866 1693723 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0115 11:14:25.399306 1693723 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0115 11:14:25.399380 1693723 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0115 11:14:25.399403 1693723 command_runner.go:130] > Device: c2h/194d	Inode: 1823271     Links: 1
	I0115 11:14:25.399431 1693723 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0115 11:14:25.399455 1693723 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0115 11:14:25.399476 1693723 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0115 11:14:25.399496 1693723 command_runner.go:130] > Change: 2024-01-15 10:51:10.451580077 +0000
	I0115 11:14:25.399526 1693723 command_runner.go:130] >  Birth: 2024-01-15 10:51:10.451580077 +0000
	I0115 11:14:25.399931 1693723 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0115 11:14:25.426472 1693723 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0115 11:14:25.426600 1693723 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0115 11:14:25.466756 1693723 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0115 11:14:25.466781 1693723 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
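
	Renaming with a ".mk_disabled" suffix, rather than deleting, leaves the conflicting bridge/podman CNI configs recoverable while keeping cri-o from loading them. The find/mv pipeline above is equivalent to:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
			matches, _ := filepath.Glob(pat)
			for _, m := range matches {
				if strings.HasSuffix(m, ".mk_disabled") {
					continue // already disabled on a previous run
				}
				if err := os.Rename(m, m+".mk_disabled"); err != nil {
					fmt.Fprintln(os.Stderr, err)
				}
			}
		}
	}
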
	I0115 11:14:25.466788 1693723 start.go:475] detecting cgroup driver to use...
	I0115 11:14:25.466820 1693723 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0115 11:14:25.466869 1693723 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0115 11:14:25.486487 1693723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0115 11:14:25.501083 1693723 docker.go:217] disabling cri-docker service (if available) ...
	I0115 11:14:25.501154 1693723 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0115 11:14:25.518004 1693723 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0115 11:14:25.535370 1693723 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0115 11:14:25.649023 1693723 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0115 11:14:25.756880 1693723 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0115 11:14:25.756908 1693723 docker.go:233] disabling docker service ...
	I0115 11:14:25.756961 1693723 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0115 11:14:25.780155 1693723 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0115 11:14:25.794413 1693723 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0115 11:14:25.903571 1693723 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0115 11:14:25.903647 1693723 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0115 11:14:26.012695 1693723 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0115 11:14:26.012771 1693723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0115 11:14:26.029220 1693723 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0115 11:14:26.051729 1693723 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0115 11:14:26.053441 1693723 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0115 11:14:26.053575 1693723 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 11:14:26.066773 1693723 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0115 11:14:26.066866 1693723 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 11:14:26.079262 1693723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0115 11:14:26.091691 1693723 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
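
	After the three sed edits above, the drop-in config points cri-o at minikube's pause image and at the "cgroupfs" driver detected on the host earlier. Roughly, assuming the [crio.*] section headers already exist in the file:

	# /etc/crio/crio.conf.d/02-crio.conf (relevant keys after the edits)
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
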
	I0115 11:14:26.104295 1693723 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0115 11:14:26.115772 1693723 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0115 11:14:26.126373 1693723 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0115 11:14:26.126481 1693723 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0115 11:14:26.137708 1693723 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 11:14:26.247705 1693723 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0115 11:14:26.397212 1693723 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0115 11:14:26.397337 1693723 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0115 11:14:26.402805 1693723 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0115 11:14:26.402867 1693723 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0115 11:14:26.402895 1693723 command_runner.go:130] > Device: cbh/203d	Inode: 186         Links: 1
	I0115 11:14:26.402919 1693723 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0115 11:14:26.402951 1693723 command_runner.go:130] > Access: 2024-01-15 11:14:26.380402400 +0000
	I0115 11:14:26.402987 1693723 command_runner.go:130] > Modify: 2024-01-15 11:14:26.380402400 +0000
	I0115 11:14:26.403008 1693723 command_runner.go:130] > Change: 2024-01-15 11:14:26.380402400 +0000
	I0115 11:14:26.403030 1693723 command_runner.go:130] >  Birth: -
	I0115 11:14:26.403297 1693723 start.go:543] Will wait 60s for crictl version
	I0115 11:14:26.403379 1693723 ssh_runner.go:195] Run: which crictl
	I0115 11:14:26.407601 1693723 command_runner.go:130] > /usr/bin/crictl
	I0115 11:14:26.408047 1693723 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0115 11:14:26.450940 1693723 command_runner.go:130] > Version:  0.1.0
	I0115 11:14:26.451011 1693723 command_runner.go:130] > RuntimeName:  cri-o
	I0115 11:14:26.451031 1693723 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0115 11:14:26.451053 1693723 command_runner.go:130] > RuntimeApiVersion:  v1
	I0115 11:14:26.453776 1693723 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0115 11:14:26.453954 1693723 ssh_runner.go:195] Run: crio --version
	I0115 11:14:26.500792 1693723 command_runner.go:130] > crio version 1.24.6
	I0115 11:14:26.500866 1693723 command_runner.go:130] > Version:          1.24.6
	I0115 11:14:26.500890 1693723 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0115 11:14:26.500912 1693723 command_runner.go:130] > GitTreeState:     clean
	I0115 11:14:26.500943 1693723 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0115 11:14:26.500968 1693723 command_runner.go:130] > GoVersion:        go1.18.2
	I0115 11:14:26.500990 1693723 command_runner.go:130] > Compiler:         gc
	I0115 11:14:26.501012 1693723 command_runner.go:130] > Platform:         linux/arm64
	I0115 11:14:26.501051 1693723 command_runner.go:130] > Linkmode:         dynamic
	I0115 11:14:26.501079 1693723 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0115 11:14:26.501098 1693723 command_runner.go:130] > SeccompEnabled:   true
	I0115 11:14:26.501118 1693723 command_runner.go:130] > AppArmorEnabled:  false
	I0115 11:14:26.502868 1693723 ssh_runner.go:195] Run: crio --version
	I0115 11:14:26.550138 1693723 command_runner.go:130] > crio version 1.24.6
	I0115 11:14:26.550221 1693723 command_runner.go:130] > Version:          1.24.6
	I0115 11:14:26.550254 1693723 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0115 11:14:26.550306 1693723 command_runner.go:130] > GitTreeState:     clean
	I0115 11:14:26.550332 1693723 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0115 11:14:26.550352 1693723 command_runner.go:130] > GoVersion:        go1.18.2
	I0115 11:14:26.550387 1693723 command_runner.go:130] > Compiler:         gc
	I0115 11:14:26.550410 1693723 command_runner.go:130] > Platform:         linux/arm64
	I0115 11:14:26.550431 1693723 command_runner.go:130] > Linkmode:         dynamic
	I0115 11:14:26.550470 1693723 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0115 11:14:26.550494 1693723 command_runner.go:130] > SeccompEnabled:   true
	I0115 11:14:26.550514 1693723 command_runner.go:130] > AppArmorEnabled:  false
	I0115 11:14:26.555569 1693723 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0115 11:14:26.557388 1693723 out.go:177]   - env NO_PROXY=192.168.58.2
	I0115 11:14:26.559308 1693723 cli_runner.go:164] Run: docker network inspect multinode-279658 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0115 11:14:26.576910 1693723 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0115 11:14:26.586570 1693723 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 11:14:26.601017 1693723 certs.go:56] Setting up /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/multinode-279658 for IP: 192.168.58.3
	I0115 11:14:26.601050 1693723 certs.go:190] acquiring lock for shared ca certs: {Name:mk2a63925baba8534769a012921a3873667cd449 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 11:14:26.601185 1693723 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17953-1625104/.minikube/ca.key
	I0115 11:14:26.601222 1693723 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17953-1625104/.minikube/proxy-client-ca.key
	I0115 11:14:26.601232 1693723 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-1625104/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0115 11:14:26.601246 1693723 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-1625104/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0115 11:14:26.601263 1693723 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-1625104/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0115 11:14:26.601277 1693723 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-1625104/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0115 11:14:26.601330 1693723 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/home/jenkins/minikube-integration/17953-1625104/.minikube/certs/1630435.pem (1338 bytes)
	W0115 11:14:26.601360 1693723 certs.go:433] ignoring /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/home/jenkins/minikube-integration/17953-1625104/.minikube/certs/1630435_empty.pem, impossibly tiny 0 bytes
	I0115 11:14:26.601369 1693723 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/home/jenkins/minikube-integration/17953-1625104/.minikube/certs/ca-key.pem (1679 bytes)
	I0115 11:14:26.601395 1693723 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/home/jenkins/minikube-integration/17953-1625104/.minikube/certs/ca.pem (1082 bytes)
	I0115 11:14:26.601417 1693723 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/home/jenkins/minikube-integration/17953-1625104/.minikube/certs/cert.pem (1123 bytes)
	I0115 11:14:26.601441 1693723 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/home/jenkins/minikube-integration/17953-1625104/.minikube/certs/key.pem (1675 bytes)
	I0115 11:14:26.601488 1693723 certs.go:437] found cert: /home/jenkins/minikube-integration/17953-1625104/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17953-1625104/.minikube/files/etc/ssl/certs/16304352.pem (1708 bytes)
	I0115 11:14:26.601514 1693723 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-1625104/.minikube/files/etc/ssl/certs/16304352.pem -> /usr/share/ca-certificates/16304352.pem
	I0115 11:14:26.601526 1693723 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-1625104/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0115 11:14:26.601536 1693723 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/1630435.pem -> /usr/share/ca-certificates/1630435.pem
	I0115 11:14:26.601893 1693723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-1625104/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0115 11:14:26.631475 1693723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-1625104/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0115 11:14:26.660318 1693723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-1625104/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0115 11:14:26.689959 1693723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-1625104/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0115 11:14:26.718832 1693723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-1625104/.minikube/files/etc/ssl/certs/16304352.pem --> /usr/share/ca-certificates/16304352.pem (1708 bytes)
	I0115 11:14:26.748046 1693723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-1625104/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0115 11:14:26.776790 1693723 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17953-1625104/.minikube/certs/1630435.pem --> /usr/share/ca-certificates/1630435.pem (1338 bytes)
	I0115 11:14:26.806206 1693723 ssh_runner.go:195] Run: openssl version
	I0115 11:14:26.813040 1693723 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0115 11:14:26.813480 1693723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16304352.pem && ln -fs /usr/share/ca-certificates/16304352.pem /etc/ssl/certs/16304352.pem"
	I0115 11:14:26.825291 1693723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16304352.pem
	I0115 11:14:26.830794 1693723 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 15 10:58 /usr/share/ca-certificates/16304352.pem
	I0115 11:14:26.830878 1693723 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 15 10:58 /usr/share/ca-certificates/16304352.pem
	I0115 11:14:26.830946 1693723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16304352.pem
	I0115 11:14:26.839073 1693723 command_runner.go:130] > 3ec20f2e
	I0115 11:14:26.839658 1693723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16304352.pem /etc/ssl/certs/3ec20f2e.0"
	I0115 11:14:26.851119 1693723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0115 11:14:26.862362 1693723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0115 11:14:26.866947 1693723 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 15 10:51 /usr/share/ca-certificates/minikubeCA.pem
	I0115 11:14:26.867250 1693723 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 15 10:51 /usr/share/ca-certificates/minikubeCA.pem
	I0115 11:14:26.867324 1693723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0115 11:14:26.875551 1693723 command_runner.go:130] > b5213941
	I0115 11:14:26.876012 1693723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0115 11:14:26.887581 1693723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1630435.pem && ln -fs /usr/share/ca-certificates/1630435.pem /etc/ssl/certs/1630435.pem"
	I0115 11:14:26.899195 1693723 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1630435.pem
	I0115 11:14:26.903457 1693723 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 15 10:58 /usr/share/ca-certificates/1630435.pem
	I0115 11:14:26.903556 1693723 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 15 10:58 /usr/share/ca-certificates/1630435.pem
	I0115 11:14:26.903652 1693723 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1630435.pem
	I0115 11:14:26.911879 1693723 command_runner.go:130] > 51391683
	I0115 11:14:26.912291 1693723 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1630435.pem /etc/ssl/certs/51391683.0"
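Each bundle under /usr/share/ca-certificates is then hashed with openssl x509 -hash and symlinked as /etc/ssl/certs/<hash>.0, the c_rehash-style layout OpenSSL walks when verifying a peer certificate. A rough Go equivalent of that loop, shelling out to openssl exactly as the log does (linkCACert is our name for it; it needs root to write /etc/ssl/certs, and error handling is trimmed):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCACert computes the OpenSSL subject hash of pemPath and creates the
	// /etc/ssl/certs/<hash>.0 symlink the OpenSSL lookup algorithm expects.
	func linkCACert(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
		_ = os.Remove(link) // mirror `ln -fs`: replace any stale link
		return os.Symlink(pemPath, link)
	}

	func main() {
		for _, pem := range []string{
			"/usr/share/ca-certificates/minikubeCA.pem",
			"/usr/share/ca-certificates/1630435.pem",
		} {
			if err := linkCACert(pem); err != nil {
				fmt.Fprintln(os.Stderr, err)
			}
		}
	}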
	I0115 11:14:26.923712 1693723 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0115 11:14:26.927948 1693723 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0115 11:14:26.928041 1693723 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
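The "likely first start" inference rests entirely on the exit code: GNU ls exits 2 when an operand cannot be accessed, so status 2 on the etcd certs directory is read as "not provisioned yet" rather than a hard failure. The same discrimination in plain Go, for illustration (minikube performs this check over its ssh_runner, not a local exec):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// GNU ls exits 2 ("serious trouble") when the path does not exist.
		err := exec.Command("ls", "/var/lib/minikube/certs/etcd").Run()
		var ee *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("etcd certs already present")
		case errors.As(err, &ee) && ee.ExitCode() == 2:
			fmt.Println("certs directory doesn't exist, likely first start")
		default:
			fmt.Println("unexpected failure:", err)
		}
	}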
	I0115 11:14:26.928150 1693723 ssh_runner.go:195] Run: crio config
	I0115 11:14:26.980975 1693723 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0115 11:14:26.981000 1693723 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0115 11:14:26.981014 1693723 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0115 11:14:26.981019 1693723 command_runner.go:130] > #
	I0115 11:14:26.981028 1693723 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0115 11:14:26.981040 1693723 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0115 11:14:26.981052 1693723 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0115 11:14:26.981062 1693723 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0115 11:14:26.981072 1693723 command_runner.go:130] > # reload'.
	I0115 11:14:26.981081 1693723 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0115 11:14:26.981092 1693723 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0115 11:14:26.981101 1693723 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0115 11:14:26.981112 1693723 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0115 11:14:26.981116 1693723 command_runner.go:130] > [crio]
	I0115 11:14:26.981124 1693723 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0115 11:14:26.981132 1693723 command_runner.go:130] > # container images, in this directory.
	I0115 11:14:26.981142 1693723 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0115 11:14:26.981155 1693723 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0115 11:14:26.981371 1693723 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0115 11:14:26.981391 1693723 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0115 11:14:26.981400 1693723 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0115 11:14:26.981419 1693723 command_runner.go:130] > # storage_driver = "vfs"
	I0115 11:14:26.981432 1693723 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0115 11:14:26.981450 1693723 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0115 11:14:26.981455 1693723 command_runner.go:130] > # storage_option = [
	I0115 11:14:26.981460 1693723 command_runner.go:130] > # ]
	I0115 11:14:26.981469 1693723 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0115 11:14:26.981480 1693723 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0115 11:14:26.981493 1693723 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0115 11:14:26.981501 1693723 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0115 11:14:26.981508 1693723 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0115 11:14:26.981514 1693723 command_runner.go:130] > # always happen on a node reboot
	I0115 11:14:26.981522 1693723 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0115 11:14:26.981531 1693723 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0115 11:14:26.981539 1693723 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0115 11:14:26.981552 1693723 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0115 11:14:26.981558 1693723 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0115 11:14:26.981567 1693723 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0115 11:14:26.981582 1693723 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0115 11:14:26.981594 1693723 command_runner.go:130] > # internal_wipe = true
	I0115 11:14:26.981600 1693723 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0115 11:14:26.981608 1693723 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0115 11:14:26.981618 1693723 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0115 11:14:26.981627 1693723 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0115 11:14:26.981636 1693723 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0115 11:14:26.981643 1693723 command_runner.go:130] > [crio.api]
	I0115 11:14:26.981650 1693723 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0115 11:14:26.981656 1693723 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0115 11:14:26.981665 1693723 command_runner.go:130] > # IP address on which the stream server will listen.
	I0115 11:14:26.981671 1693723 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0115 11:14:26.981681 1693723 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0115 11:14:26.981688 1693723 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0115 11:14:26.981693 1693723 command_runner.go:130] > # stream_port = "0"
	I0115 11:14:26.981701 1693723 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0115 11:14:26.981931 1693723 command_runner.go:130] > # stream_enable_tls = false
	I0115 11:14:26.981975 1693723 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0115 11:14:26.982052 1693723 command_runner.go:130] > # stream_idle_timeout = ""
	I0115 11:14:26.982071 1693723 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0115 11:14:26.982080 1693723 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0115 11:14:26.982087 1693723 command_runner.go:130] > # minutes.
	I0115 11:14:26.982458 1693723 command_runner.go:130] > # stream_tls_cert = ""
	I0115 11:14:26.982474 1693723 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0115 11:14:26.982482 1693723 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0115 11:14:26.982643 1693723 command_runner.go:130] > # stream_tls_key = ""
	I0115 11:14:26.982658 1693723 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0115 11:14:26.982667 1693723 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0115 11:14:26.982680 1693723 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0115 11:14:26.982933 1693723 command_runner.go:130] > # stream_tls_ca = ""
	I0115 11:14:26.982951 1693723 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0115 11:14:26.983278 1693723 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0115 11:14:26.983293 1693723 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0115 11:14:26.983561 1693723 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0115 11:14:26.983585 1693723 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0115 11:14:26.983593 1693723 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0115 11:14:26.983601 1693723 command_runner.go:130] > [crio.runtime]
	I0115 11:14:26.983612 1693723 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0115 11:14:26.983619 1693723 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0115 11:14:26.983626 1693723 command_runner.go:130] > # "nofile=1024:2048"
	I0115 11:14:26.983633 1693723 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0115 11:14:26.983853 1693723 command_runner.go:130] > # default_ulimits = [
	I0115 11:14:26.984050 1693723 command_runner.go:130] > # ]
	I0115 11:14:26.984066 1693723 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0115 11:14:26.984467 1693723 command_runner.go:130] > # no_pivot = false
	I0115 11:14:26.984510 1693723 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0115 11:14:26.984533 1693723 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0115 11:14:26.984875 1693723 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0115 11:14:26.984892 1693723 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0115 11:14:26.984899 1693723 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0115 11:14:26.984907 1693723 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0115 11:14:26.985276 1693723 command_runner.go:130] > # conmon = ""
	I0115 11:14:26.985315 1693723 command_runner.go:130] > # Cgroup setting for conmon
	I0115 11:14:26.985339 1693723 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0115 11:14:26.985511 1693723 command_runner.go:130] > conmon_cgroup = "pod"
	I0115 11:14:26.985548 1693723 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0115 11:14:26.985568 1693723 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0115 11:14:26.985592 1693723 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0115 11:14:26.985671 1693723 command_runner.go:130] > # conmon_env = [
	I0115 11:14:26.985944 1693723 command_runner.go:130] > # ]
	I0115 11:14:26.985972 1693723 command_runner.go:130] > # Additional environment variables to set for all the
	I0115 11:14:26.985996 1693723 command_runner.go:130] > # containers. These are overridden if set in the
	I0115 11:14:26.986030 1693723 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0115 11:14:26.986121 1693723 command_runner.go:130] > # default_env = [
	I0115 11:14:26.986406 1693723 command_runner.go:130] > # ]
	I0115 11:14:26.986424 1693723 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0115 11:14:26.986943 1693723 command_runner.go:130] > # selinux = false
	I0115 11:14:26.986961 1693723 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0115 11:14:26.986971 1693723 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0115 11:14:26.986978 1693723 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0115 11:14:26.987387 1693723 command_runner.go:130] > # seccomp_profile = ""
	I0115 11:14:26.987401 1693723 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0115 11:14:26.987409 1693723 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0115 11:14:26.987419 1693723 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0115 11:14:26.987430 1693723 command_runner.go:130] > # which might increase security.
	I0115 11:14:26.987880 1693723 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0115 11:14:26.987900 1693723 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0115 11:14:26.987908 1693723 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0115 11:14:26.987916 1693723 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0115 11:14:26.987927 1693723 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0115 11:14:26.987936 1693723 command_runner.go:130] > # This option supports live configuration reload.
	I0115 11:14:26.988387 1693723 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0115 11:14:26.988405 1693723 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0115 11:14:26.988412 1693723 command_runner.go:130] > # the cgroup blockio controller.
	I0115 11:14:26.988775 1693723 command_runner.go:130] > # blockio_config_file = ""
	I0115 11:14:26.988792 1693723 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0115 11:14:26.988797 1693723 command_runner.go:130] > # irqbalance daemon.
	I0115 11:14:26.989276 1693723 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0115 11:14:26.989293 1693723 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0115 11:14:26.989300 1693723 command_runner.go:130] > # This option supports live configuration reload.
	I0115 11:14:26.989691 1693723 command_runner.go:130] > # rdt_config_file = ""
	I0115 11:14:26.989708 1693723 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0115 11:14:26.989978 1693723 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0115 11:14:26.989992 1693723 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0115 11:14:26.990397 1693723 command_runner.go:130] > # separate_pull_cgroup = ""
	I0115 11:14:26.990414 1693723 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0115 11:14:26.990422 1693723 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0115 11:14:26.990427 1693723 command_runner.go:130] > # will be added.
	I0115 11:14:26.990685 1693723 command_runner.go:130] > # default_capabilities = [
	I0115 11:14:26.991056 1693723 command_runner.go:130] > # 	"CHOWN",
	I0115 11:14:26.991340 1693723 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0115 11:14:26.991675 1693723 command_runner.go:130] > # 	"FSETID",
	I0115 11:14:26.991955 1693723 command_runner.go:130] > # 	"FOWNER",
	I0115 11:14:26.992238 1693723 command_runner.go:130] > # 	"SETGID",
	I0115 11:14:26.992528 1693723 command_runner.go:130] > # 	"SETUID",
	I0115 11:14:26.992803 1693723 command_runner.go:130] > # 	"SETPCAP",
	I0115 11:14:26.993071 1693723 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0115 11:14:26.993352 1693723 command_runner.go:130] > # 	"KILL",
	I0115 11:14:26.993617 1693723 command_runner.go:130] > # ]
	I0115 11:14:26.993632 1693723 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0115 11:14:26.993641 1693723 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0115 11:14:26.994178 1693723 command_runner.go:130] > # add_inheritable_capabilities = true
	I0115 11:14:26.994196 1693723 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0115 11:14:26.994205 1693723 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0115 11:14:26.994492 1693723 command_runner.go:130] > # default_sysctls = [
	I0115 11:14:26.994791 1693723 command_runner.go:130] > # ]
	I0115 11:14:26.994809 1693723 command_runner.go:130] > # List of devices on the host that a
	I0115 11:14:26.994818 1693723 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0115 11:14:26.995076 1693723 command_runner.go:130] > # allowed_devices = [
	I0115 11:14:26.995397 1693723 command_runner.go:130] > # 	"/dev/fuse",
	I0115 11:14:26.995666 1693723 command_runner.go:130] > # ]
	I0115 11:14:26.995680 1693723 command_runner.go:130] > # List of additional devices, specified as
	I0115 11:14:26.995698 1693723 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0115 11:14:26.995720 1693723 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0115 11:14:26.995731 1693723 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0115 11:14:26.995974 1693723 command_runner.go:130] > # additional_devices = [
	I0115 11:14:26.996235 1693723 command_runner.go:130] > # ]
	I0115 11:14:26.996250 1693723 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0115 11:14:26.996485 1693723 command_runner.go:130] > # cdi_spec_dirs = [
	I0115 11:14:26.996754 1693723 command_runner.go:130] > # 	"/etc/cdi",
	I0115 11:14:26.997012 1693723 command_runner.go:130] > # 	"/var/run/cdi",
	I0115 11:14:26.997256 1693723 command_runner.go:130] > # ]
	I0115 11:14:26.997272 1693723 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0115 11:14:26.997281 1693723 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0115 11:14:26.997291 1693723 command_runner.go:130] > # Defaults to false.
	I0115 11:14:26.997764 1693723 command_runner.go:130] > # device_ownership_from_security_context = false
	I0115 11:14:26.997781 1693723 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0115 11:14:26.997789 1693723 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0115 11:14:26.998050 1693723 command_runner.go:130] > # hooks_dir = [
	I0115 11:14:26.998340 1693723 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0115 11:14:26.998592 1693723 command_runner.go:130] > # ]
	I0115 11:14:26.998608 1693723 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0115 11:14:26.998617 1693723 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0115 11:14:26.998625 1693723 command_runner.go:130] > # its default mounts from the following two files:
	I0115 11:14:26.998629 1693723 command_runner.go:130] > #
	I0115 11:14:26.998643 1693723 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0115 11:14:26.998654 1693723 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0115 11:14:26.998661 1693723 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0115 11:14:26.998665 1693723 command_runner.go:130] > #
	I0115 11:14:26.998680 1693723 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0115 11:14:26.998688 1693723 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0115 11:14:26.998700 1693723 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0115 11:14:26.998707 1693723 command_runner.go:130] > #      only add mounts it finds in this file.
	I0115 11:14:26.998711 1693723 command_runner.go:130] > #
	I0115 11:14:26.999111 1693723 command_runner.go:130] > # default_mounts_file = ""
	I0115 11:14:26.999127 1693723 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0115 11:14:26.999137 1693723 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0115 11:14:26.999618 1693723 command_runner.go:130] > # pids_limit = 0
	I0115 11:14:26.999635 1693723 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0115 11:14:26.999643 1693723 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0115 11:14:26.999651 1693723 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0115 11:14:26.999661 1693723 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0115 11:14:27.000138 1693723 command_runner.go:130] > # log_size_max = -1
	I0115 11:14:27.000156 1693723 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0115 11:14:27.000628 1693723 command_runner.go:130] > # log_to_journald = false
	I0115 11:14:27.000645 1693723 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0115 11:14:27.001113 1693723 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0115 11:14:27.001130 1693723 command_runner.go:130] > # Path to directory for container attach sockets.
	I0115 11:14:27.001606 1693723 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0115 11:14:27.001629 1693723 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0115 11:14:27.001870 1693723 command_runner.go:130] > # bind_mount_prefix = ""
	I0115 11:14:27.001881 1693723 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0115 11:14:27.002384 1693723 command_runner.go:130] > # read_only = false
	I0115 11:14:27.002408 1693723 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0115 11:14:27.002418 1693723 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0115 11:14:27.002423 1693723 command_runner.go:130] > # live configuration reload.
	I0115 11:14:27.002898 1693723 command_runner.go:130] > # log_level = "info"
	I0115 11:14:27.002910 1693723 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0115 11:14:27.002917 1693723 command_runner.go:130] > # This option supports live configuration reload.
	I0115 11:14:27.003264 1693723 command_runner.go:130] > # log_filter = ""
	I0115 11:14:27.003279 1693723 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0115 11:14:27.003286 1693723 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0115 11:14:27.003292 1693723 command_runner.go:130] > # separated by comma.
	I0115 11:14:27.003642 1693723 command_runner.go:130] > # uid_mappings = ""
	I0115 11:14:27.003659 1693723 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0115 11:14:27.003668 1693723 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0115 11:14:27.003673 1693723 command_runner.go:130] > # separated by comma.
	I0115 11:14:27.004034 1693723 command_runner.go:130] > # gid_mappings = ""
	I0115 11:14:27.004052 1693723 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0115 11:14:27.004061 1693723 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0115 11:14:27.004074 1693723 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0115 11:14:27.005481 1693723 command_runner.go:130] > # minimum_mappable_uid = -1
	I0115 11:14:27.005503 1693723 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0115 11:14:27.005511 1693723 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0115 11:14:27.005519 1693723 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0115 11:14:27.005525 1693723 command_runner.go:130] > # minimum_mappable_gid = -1
	I0115 11:14:27.005532 1693723 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0115 11:14:27.005541 1693723 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0115 11:14:27.005548 1693723 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0115 11:14:27.005557 1693723 command_runner.go:130] > # ctr_stop_timeout = 30
	I0115 11:14:27.005565 1693723 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0115 11:14:27.005574 1693723 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0115 11:14:27.005581 1693723 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0115 11:14:27.005592 1693723 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0115 11:14:27.005598 1693723 command_runner.go:130] > # drop_infra_ctr = true
	I0115 11:14:27.005605 1693723 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0115 11:14:27.005615 1693723 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0115 11:14:27.005625 1693723 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0115 11:14:27.005633 1693723 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0115 11:14:27.005641 1693723 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0115 11:14:27.005650 1693723 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0115 11:14:27.005657 1693723 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0115 11:14:27.005666 1693723 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0115 11:14:27.005674 1693723 command_runner.go:130] > # pinns_path = ""
	I0115 11:14:27.005682 1693723 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0115 11:14:27.005690 1693723 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0115 11:14:27.005702 1693723 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0115 11:14:27.005707 1693723 command_runner.go:130] > # default_runtime = "runc"
	I0115 11:14:27.005716 1693723 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0115 11:14:27.005726 1693723 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of the path being created as a directory).
	I0115 11:14:27.005738 1693723 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0115 11:14:27.005747 1693723 command_runner.go:130] > # creation as a file is not desired either.
	I0115 11:14:27.005757 1693723 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0115 11:14:27.005764 1693723 command_runner.go:130] > # the hostname is being managed dynamically.
	I0115 11:14:27.005773 1693723 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0115 11:14:27.005781 1693723 command_runner.go:130] > # ]
	I0115 11:14:27.005791 1693723 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0115 11:14:27.005802 1693723 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0115 11:14:27.005810 1693723 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0115 11:14:27.005821 1693723 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0115 11:14:27.005828 1693723 command_runner.go:130] > #
	I0115 11:14:27.005834 1693723 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0115 11:14:27.005843 1693723 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0115 11:14:27.005848 1693723 command_runner.go:130] > #  runtime_type = "oci"
	I0115 11:14:27.005857 1693723 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0115 11:14:27.005863 1693723 command_runner.go:130] > #  privileged_without_host_devices = false
	I0115 11:14:27.005868 1693723 command_runner.go:130] > #  allowed_annotations = []
	I0115 11:14:27.005873 1693723 command_runner.go:130] > # Where:
	I0115 11:14:27.005882 1693723 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0115 11:14:27.005889 1693723 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0115 11:14:27.005901 1693723 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0115 11:14:27.005908 1693723 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0115 11:14:27.005913 1693723 command_runner.go:130] > #   in $PATH.
	I0115 11:14:27.005924 1693723 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0115 11:14:27.005931 1693723 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0115 11:14:27.005940 1693723 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0115 11:14:27.005957 1693723 command_runner.go:130] > #   state.
	I0115 11:14:27.005965 1693723 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0115 11:14:27.005975 1693723 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0115 11:14:27.005982 1693723 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0115 11:14:27.005993 1693723 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0115 11:14:27.006001 1693723 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0115 11:14:27.006009 1693723 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0115 11:14:27.006017 1693723 command_runner.go:130] > #   The currently recognized values are:
	I0115 11:14:27.006026 1693723 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0115 11:14:27.006037 1693723 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0115 11:14:27.006045 1693723 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0115 11:14:27.006054 1693723 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0115 11:14:27.006066 1693723 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0115 11:14:27.006075 1693723 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0115 11:14:27.006085 1693723 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0115 11:14:27.006093 1693723 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0115 11:14:27.006100 1693723 command_runner.go:130] > #   should be moved to the container's cgroup
	I0115 11:14:27.006108 1693723 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0115 11:14:27.006116 1693723 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0115 11:14:27.006124 1693723 command_runner.go:130] > runtime_type = "oci"
	I0115 11:14:27.006129 1693723 command_runner.go:130] > runtime_root = "/run/runc"
	I0115 11:14:27.006134 1693723 command_runner.go:130] > runtime_config_path = ""
	I0115 11:14:27.006139 1693723 command_runner.go:130] > monitor_path = ""
	I0115 11:14:27.006146 1693723 command_runner.go:130] > monitor_cgroup = ""
	I0115 11:14:27.006151 1693723 command_runner.go:130] > monitor_exec_cgroup = ""
	I0115 11:14:27.006171 1693723 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0115 11:14:27.006179 1693723 command_runner.go:130] > # running containers
	I0115 11:14:27.006184 1693723 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0115 11:14:27.006192 1693723 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0115 11:14:27.006203 1693723 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0115 11:14:27.006210 1693723 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0115 11:14:27.006219 1693723 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0115 11:14:27.006226 1693723 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0115 11:14:27.006234 1693723 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0115 11:14:27.006240 1693723 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0115 11:14:27.006246 1693723 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0115 11:14:27.006253 1693723 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0115 11:14:27.006264 1693723 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0115 11:14:27.006271 1693723 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0115 11:14:27.006304 1693723 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0115 11:14:27.006314 1693723 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0115 11:14:27.006327 1693723 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0115 11:14:27.006343 1693723 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0115 11:14:27.006354 1693723 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0115 11:14:27.006367 1693723 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0115 11:14:27.006374 1693723 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0115 11:14:27.006387 1693723 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0115 11:14:27.006392 1693723 command_runner.go:130] > # Example:
	I0115 11:14:27.006398 1693723 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0115 11:14:27.006406 1693723 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0115 11:14:27.006412 1693723 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0115 11:14:27.006421 1693723 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0115 11:14:27.006426 1693723 command_runner.go:130] > # cpuset = 0
	I0115 11:14:27.006431 1693723 command_runner.go:130] > # cpushares = "0-1"
	I0115 11:14:27.006438 1693723 command_runner.go:130] > # Where:
	I0115 11:14:27.006444 1693723 command_runner.go:130] > # The workload name is workload-type.
	I0115 11:14:27.006456 1693723 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0115 11:14:27.006463 1693723 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0115 11:14:27.006473 1693723 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0115 11:14:27.006484 1693723 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0115 11:14:27.006494 1693723 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0115 11:14:27.006498 1693723 command_runner.go:130] > # 
	I0115 11:14:27.006508 1693723 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0115 11:14:27.006512 1693723 command_runner.go:130] > #
	I0115 11:14:27.006522 1693723 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0115 11:14:27.006530 1693723 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0115 11:14:27.006537 1693723 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0115 11:14:27.006548 1693723 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0115 11:14:27.006556 1693723 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0115 11:14:27.006561 1693723 command_runner.go:130] > [crio.image]
	I0115 11:14:27.006574 1693723 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0115 11:14:27.006581 1693723 command_runner.go:130] > # default_transport = "docker://"
	I0115 11:14:27.006590 1693723 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0115 11:14:27.006602 1693723 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0115 11:14:27.006607 1693723 command_runner.go:130] > # global_auth_file = ""
	I0115 11:14:27.006614 1693723 command_runner.go:130] > # The image used to instantiate infra containers.
	I0115 11:14:27.006623 1693723 command_runner.go:130] > # This option supports live configuration reload.
	I0115 11:14:27.006630 1693723 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0115 11:14:27.006638 1693723 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0115 11:14:27.006647 1693723 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0115 11:14:27.006654 1693723 command_runner.go:130] > # This option supports live configuration reload.
	I0115 11:14:27.006662 1693723 command_runner.go:130] > # pause_image_auth_file = ""
	I0115 11:14:27.006670 1693723 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0115 11:14:27.006678 1693723 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0115 11:14:27.006690 1693723 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0115 11:14:27.006698 1693723 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0115 11:14:27.006707 1693723 command_runner.go:130] > # pause_command = "/pause"
	I0115 11:14:27.006714 1693723 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0115 11:14:27.006723 1693723 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0115 11:14:27.006730 1693723 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0115 11:14:27.006750 1693723 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0115 11:14:27.006757 1693723 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0115 11:14:27.006763 1693723 command_runner.go:130] > # signature_policy = ""
	I0115 11:14:27.006773 1693723 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0115 11:14:27.006781 1693723 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0115 11:14:27.006786 1693723 command_runner.go:130] > # changing them here.
	I0115 11:14:27.006794 1693723 command_runner.go:130] > # insecure_registries = [
	I0115 11:14:27.006798 1693723 command_runner.go:130] > # ]
	I0115 11:14:27.006806 1693723 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0115 11:14:27.006812 1693723 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0115 11:14:27.006818 1693723 command_runner.go:130] > # image_volumes = "mkdir"
	I0115 11:14:27.006830 1693723 command_runner.go:130] > # Temporary directory to use for storing big files
	I0115 11:14:27.006836 1693723 command_runner.go:130] > # big_files_temporary_dir = ""
	I0115 11:14:27.006844 1693723 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0115 11:14:27.006856 1693723 command_runner.go:130] > # CNI plugins.
	I0115 11:14:27.006861 1693723 command_runner.go:130] > [crio.network]
	I0115 11:14:27.006869 1693723 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0115 11:14:27.006880 1693723 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0115 11:14:27.006886 1693723 command_runner.go:130] > # cni_default_network = ""
	I0115 11:14:27.006894 1693723 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0115 11:14:27.006905 1693723 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0115 11:14:27.006912 1693723 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0115 11:14:27.006917 1693723 command_runner.go:130] > # plugin_dirs = [
	I0115 11:14:27.006925 1693723 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0115 11:14:27.006929 1693723 command_runner.go:130] > # ]
	I0115 11:14:27.006937 1693723 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0115 11:14:27.006946 1693723 command_runner.go:130] > [crio.metrics]
	I0115 11:14:27.006952 1693723 command_runner.go:130] > # Globally enable or disable metrics support.
	I0115 11:14:27.006961 1693723 command_runner.go:130] > # enable_metrics = false
	I0115 11:14:27.006967 1693723 command_runner.go:130] > # Specify enabled metrics collectors.
	I0115 11:14:27.006972 1693723 command_runner.go:130] > # Per default all metrics are enabled.
	I0115 11:14:27.006980 1693723 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0115 11:14:27.006988 1693723 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0115 11:14:27.006995 1693723 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0115 11:14:27.007006 1693723 command_runner.go:130] > # metrics_collectors = [
	I0115 11:14:27.007011 1693723 command_runner.go:130] > # 	"operations",
	I0115 11:14:27.007018 1693723 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0115 11:14:27.007028 1693723 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0115 11:14:27.007033 1693723 command_runner.go:130] > # 	"operations_errors",
	I0115 11:14:27.007041 1693723 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0115 11:14:27.007050 1693723 command_runner.go:130] > # 	"image_pulls_by_name",
	I0115 11:14:27.007056 1693723 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0115 11:14:27.007062 1693723 command_runner.go:130] > # 	"image_pulls_failures",
	I0115 11:14:27.007067 1693723 command_runner.go:130] > # 	"image_pulls_successes",
	I0115 11:14:27.007072 1693723 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0115 11:14:27.007077 1693723 command_runner.go:130] > # 	"image_layer_reuse",
	I0115 11:14:27.007083 1693723 command_runner.go:130] > # 	"containers_oom_total",
	I0115 11:14:27.007088 1693723 command_runner.go:130] > # 	"containers_oom",
	I0115 11:14:27.007097 1693723 command_runner.go:130] > # 	"processes_defunct",
	I0115 11:14:27.007102 1693723 command_runner.go:130] > # 	"operations_total",
	I0115 11:14:27.007108 1693723 command_runner.go:130] > # 	"operations_latency_seconds",
	I0115 11:14:27.007117 1693723 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0115 11:14:27.007123 1693723 command_runner.go:130] > # 	"operations_errors_total",
	I0115 11:14:27.007133 1693723 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0115 11:14:27.007140 1693723 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0115 11:14:27.007145 1693723 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0115 11:14:27.007151 1693723 command_runner.go:130] > # 	"image_pulls_success_total",
	I0115 11:14:27.007157 1693723 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0115 11:14:27.007162 1693723 command_runner.go:130] > # 	"containers_oom_count_total",
	I0115 11:14:27.007169 1693723 command_runner.go:130] > # ]
	I0115 11:14:27.007176 1693723 command_runner.go:130] > # The port on which the metrics server will listen.
	I0115 11:14:27.007183 1693723 command_runner.go:130] > # metrics_port = 9090
	I0115 11:14:27.007190 1693723 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0115 11:14:27.007195 1693723 command_runner.go:130] > # metrics_socket = ""
	I0115 11:14:27.007204 1693723 command_runner.go:130] > # The certificate for the secure metrics server.
	I0115 11:14:27.007212 1693723 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0115 11:14:27.007220 1693723 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0115 11:14:27.007226 1693723 command_runner.go:130] > # certificate on any modification event.
	I0115 11:14:27.007231 1693723 command_runner.go:130] > # metrics_cert = ""
	I0115 11:14:27.007238 1693723 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0115 11:14:27.007248 1693723 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0115 11:14:27.007255 1693723 command_runner.go:130] > # metrics_key = ""
	I0115 11:14:27.007263 1693723 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0115 11:14:27.007271 1693723 command_runner.go:130] > [crio.tracing]
	I0115 11:14:27.007279 1693723 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0115 11:14:27.007284 1693723 command_runner.go:130] > # enable_tracing = false
	I0115 11:14:27.007296 1693723 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0115 11:14:27.007301 1693723 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0115 11:14:27.007308 1693723 command_runner.go:130] > # Number of samples to collect per million spans.
	I0115 11:14:27.007314 1693723 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0115 11:14:27.007321 1693723 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0115 11:14:27.007330 1693723 command_runner.go:130] > [crio.stats]
	I0115 11:14:27.007338 1693723 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0115 11:14:27.007349 1693723 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0115 11:14:27.007355 1693723 command_runner.go:130] > # stats_collection_period = 0
	I0115 11:14:27.009495 1693723 command_runner.go:130] ! time="2024-01-15 11:14:26.978365689Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0115 11:14:27.009557 1693723 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
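Everything `crio config` printed above is the effective configuration: commented keys show compiled-in defaults, while the few uncommented lines (conmon_cgroup, cgroup_manager, the [crio.runtime.runtimes.runc] table, pause_image) are the values minikube actually overrides. A small dependency-free sketch for pulling one active key out of such a dump (activeValue is our helper; reading /etc/crio/crio.conf instead of piping `crio config` is an assumption):

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// activeValue scans TOML-style text for the first uncommented
	// `key = "value"` assignment and returns the unquoted value.
	func activeValue(sc *bufio.Scanner, key string) (string, bool) {
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if strings.HasPrefix(line, "#") {
				continue // commented-out default, not an override
			}
			if k, v, ok := strings.Cut(line, "="); ok && strings.TrimSpace(k) == key {
				return strings.Trim(strings.TrimSpace(v), `"`), true
			}
		}
		return "", false
	}

	func main() {
		f, err := os.Open("/etc/crio/crio.conf")
		if err != nil {
			panic(err)
		}
		defer f.Close()
		if v, ok := activeValue(bufio.NewScanner(f), "cgroup_manager"); ok {
			fmt.Println("cgroup_manager =", v) // "cgroupfs" per the dump above
		}
	}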
	I0115 11:14:27.009649 1693723 cni.go:84] Creating CNI manager for ""
	I0115 11:14:27.009661 1693723 cni.go:136] 2 nodes found, recommending kindnet
	I0115 11:14:27.009671 1693723 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0115 11:14:27.009692 1693723 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-279658 NodeName:multinode-279658-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0115 11:14:27.009835 1693723 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-279658-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
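	The three YAML documents above (InitConfiguration/ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are the kubeadm config minikube renders for this node. For comparison against upstream defaults, kubeadm can print its own default versions of the same documents; a sketch, assuming kubeadm v1.28.x is on the PATH:

	    kubeadm config print init-defaults --component-configs KubeletConfiguration,KubeProxyConfiguration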
	
	I0115 11:14:27.009897 1693723 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-279658-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-279658 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
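The `[Unit]`/`[Service]` fragment above is the kubelet systemd drop-in, and the trailing `config:` struct is the minikube cluster config it was rendered from. Once the drop-in is written to the node (see the scp lines below), the merged unit can be reviewed with systemd itself; a sketch, assuming SSH access to the node:

    systemctl cat kubelet                        # unit file plus all drop-ins
    systemctl show kubelet --property=ExecStart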
	I0115 11:14:27.009973 1693723 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0115 11:14:27.020033 1693723 command_runner.go:130] > kubeadm
	I0115 11:14:27.020057 1693723 command_runner.go:130] > kubectl
	I0115 11:14:27.020063 1693723 command_runner.go:130] > kubelet
	I0115 11:14:27.021215 1693723 binaries.go:44] Found k8s binaries, skipping transfer
	I0115 11:14:27.021286 1693723 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0115 11:14:27.033874 1693723 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0115 11:14:27.058675 1693723 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0115 11:14:27.089978 1693723 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0115 11:14:27.095057 1693723 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
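The bash one-liner above atomically rewrites /etc/hosts: it filters out any stale control-plane.minikube.internal entry, appends the current one, and copies the temp file back into place. The result can be checked with a plain grep; on this cluster the expected entry is the address shown in the command:

    grep control-plane.minikube.internal /etc/hosts
    # expected here: 192.168.58.2	control-plane.minikube.internal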
	I0115 11:14:27.109672 1693723 host.go:66] Checking if "multinode-279658" exists ...
	I0115 11:14:27.109971 1693723 start.go:304] JoinCluster: &{Name:multinode-279658 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-279658 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 11:14:27.110066 1693723 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0115 11:14:27.110123 1693723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-279658
	I0115 11:14:27.110520 1693723 config.go:182] Loaded profile config "multinode-279658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 11:14:27.129073 1693723 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34794 SSHKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/multinode-279658/id_rsa Username:docker}
	I0115 11:14:27.306019 1693723 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token nso921.vrcfi3e9ryhodew4 --discovery-token-ca-cert-hash sha256:9fc86a3add6326d4608da878bd8e422e94962742c71a62ee80a4f994be1f8a81 
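The join command above was produced by the `kubeadm token create --print-join-command --ttl=0` invocation a few lines earlier; `--ttl=0` makes the bootstrap token non-expiring. On the control-plane node, tokens and the CA hash can be inspected independently; a sketch (note that on this cluster the CA lives under the certificatesDir from the config above, /var/lib/minikube/certs, rather than the stock /etc/kubernetes/pki):

    sudo kubeadm token list
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'   # recompute discovery-token-ca-cert-hash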
	I0115 11:14:27.306064 1693723 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0115 11:14:27.306100 1693723 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token nso921.vrcfi3e9ryhodew4 --discovery-token-ca-cert-hash sha256:9fc86a3add6326d4608da878bd8e422e94962742c71a62ee80a4f994be1f8a81 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-279658-m02"
	I0115 11:14:27.352487 1693723 command_runner.go:130] > [preflight] Running pre-flight checks
	I0115 11:14:27.394549 1693723 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0115 11:14:27.394576 1693723 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1051-aws
	I0115 11:14:27.394583 1693723 command_runner.go:130] > OS: Linux
	I0115 11:14:27.394590 1693723 command_runner.go:130] > CGROUPS_CPU: enabled
	I0115 11:14:27.394608 1693723 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0115 11:14:27.394618 1693723 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0115 11:14:27.394624 1693723 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0115 11:14:27.394634 1693723 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0115 11:14:27.394641 1693723 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0115 11:14:27.394655 1693723 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0115 11:14:27.394661 1693723 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0115 11:14:27.394671 1693723 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0115 11:14:27.506787 1693723 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0115 11:14:27.506816 1693723 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0115 11:14:27.540556 1693723 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0115 11:14:27.540757 1693723 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0115 11:14:27.540862 1693723 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0115 11:14:27.655120 1693723 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0115 11:14:30.669269 1693723 command_runner.go:130] > This node has joined the cluster:
	I0115 11:14:30.669295 1693723 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0115 11:14:30.669304 1693723 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0115 11:14:30.669312 1693723 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0115 11:14:30.672265 1693723 command_runner.go:130] ! W0115 11:14:27.352112    1024 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0115 11:14:30.672305 1693723 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I0115 11:14:30.672322 1693723 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0115 11:14:30.672343 1693723 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token nso921.vrcfi3e9ryhodew4 --discovery-token-ca-cert-hash sha256:9fc86a3add6326d4608da878bd8e422e94962742c71a62ee80a4f994be1f8a81 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-279658-m02": (3.366225366s)
	I0115 11:14:30.672364 1693723 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0115 11:14:30.897796 1693723 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0115 11:14:30.897889 1693723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=49acfca761ba3cce5d2bedb7b4a0191c7f924d23 minikube.k8s.io/name=multinode-279658 minikube.k8s.io/updated_at=2024_01_15T11_14_30_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 11:14:31.022152 1693723 command_runner.go:130] > node/multinode-279658-m02 labeled
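The label call above uses the selector `-l minikube.k8s.io/primary!=true` so that only non-primary nodes (here, the freshly joined worker) are relabeled, and `--overwrite` so reruns are idempotent. The labels can be confirmed afterwards; a sketch, assuming the profile's kubectl context:

    kubectl --context multinode-279658 get nodes -L minikube.k8s.io/primary,minikube.k8s.io/version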
	I0115 11:14:31.025971 1693723 start.go:306] JoinCluster complete in 3.915993556s
	I0115 11:14:31.026002 1693723 cni.go:84] Creating CNI manager for ""
	I0115 11:14:31.026010 1693723 cni.go:136] 2 nodes found, recommending kindnet
	I0115 11:14:31.026068 1693723 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0115 11:14:31.030903 1693723 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0115 11:14:31.030926 1693723 command_runner.go:130] >   Size: 4030506   	Blocks: 7880       IO Block: 4096   regular file
	I0115 11:14:31.030934 1693723 command_runner.go:130] > Device: 3ah/58d	Inode: 1826992     Links: 1
	I0115 11:14:31.030942 1693723 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0115 11:14:31.030949 1693723 command_runner.go:130] > Access: 2023-12-04 16:39:54.000000000 +0000
	I0115 11:14:31.030956 1693723 command_runner.go:130] > Modify: 2023-12-04 16:39:54.000000000 +0000
	I0115 11:14:31.030961 1693723 command_runner.go:130] > Change: 2024-01-15 10:51:11.139562617 +0000
	I0115 11:14:31.030968 1693723 command_runner.go:130] >  Birth: 2024-01-15 10:51:11.091563836 +0000
	I0115 11:14:31.031022 1693723 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0115 11:14:31.031036 1693723 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0115 11:14:31.055732 1693723 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0115 11:14:31.483143 1693723 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0115 11:14:31.487441 1693723 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0115 11:14:31.490500 1693723 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0115 11:14:31.503623 1693723 command_runner.go:130] > daemonset.apps/kindnet configured
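Re-applying the kindnet manifest is idempotent: the RBAC objects come back `unchanged` and only the DaemonSet is `configured` (it now needs a pod on the new node). Rollout progress can be followed directly; a sketch, assuming kindnet runs in kube-system as in minikube's bundled manifest:

    kubectl --context multinode-279658 -n kube-system rollout status daemonset/kindnet --timeout=2m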
	I0115 11:14:31.511021 1693723 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17953-1625104/kubeconfig
	I0115 11:14:31.511312 1693723 kapi.go:59] client config for multinode-279658: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/multinode-279658/client.crt", KeyFile:"/home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/multinode-279658/client.key", CAFile:"/home/jenkins/minikube-integration/17953-1625104/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9dd0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0115 11:14:31.511630 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0115 11:14:31.511644 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:31.511654 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:31.511661 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:31.514324 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:31.514343 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:31.514351 1693723 round_trippers.go:580]     Audit-Id: 579eaabb-143b-4b4c-817f-a62b056ed73c
	I0115 11:14:31.514357 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:31.514364 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:31.514370 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:31.514376 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:31.514383 1693723 round_trippers.go:580]     Content-Length: 291
	I0115 11:14:31.514389 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:31 GMT
	I0115 11:14:31.514580 1693723 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"ce1f76ee-403c-4b9e-85a1-54036c2cd680","resourceVersion":"446","creationTimestamp":"2024-01-15T11:13:28Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0115 11:14:31.514706 1693723 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"ce1f76ee-403c-4b9e-85a1-54036c2cd680","resourceVersion":"446","creationTimestamp":"2024-01-15T11:13:28Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0115 11:14:31.514766 1693723 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0115 11:14:31.514773 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:31.514781 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:31.514788 1693723 round_trippers.go:473]     Content-Type: application/json
	I0115 11:14:31.514794 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:31.525443 1693723 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0115 11:14:31.525464 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:31.525472 1693723 round_trippers.go:580]     Audit-Id: 6cd87592-229a-4645-abf1-6d413c794c06
	I0115 11:14:31.525479 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:31.525485 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:31.525491 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:31.525498 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:31.525504 1693723 round_trippers.go:580]     Content-Length: 291
	I0115 11:14:31.525516 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:31 GMT
	I0115 11:14:31.525747 1693723 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"ce1f76ee-403c-4b9e-85a1-54036c2cd680","resourceVersion":"493","creationTimestamp":"2024-01-15T11:13:28Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0115 11:14:32.012609 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0115 11:14:32.012634 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:32.012644 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:32.012652 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:32.015248 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:32.015272 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:32.015281 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:32.015288 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:32.015294 1693723 round_trippers.go:580]     Content-Length: 291
	I0115 11:14:32.015303 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:32 GMT
	I0115 11:14:32.015309 1693723 round_trippers.go:580]     Audit-Id: b137e008-c51b-4d18-a523-c4bf250aec09
	I0115 11:14:32.015316 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:32.015327 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:32.015349 1693723 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"ce1f76ee-403c-4b9e-85a1-54036c2cd680","resourceVersion":"505","creationTimestamp":"2024-01-15T11:13:28Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0115 11:14:32.015442 1693723 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-279658" context rescaled to 1 replicas
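The GET/PUT pair above drives the `autoscaling/v1` Scale subresource, dropping coredns from 2 replicas to 1 (minikube keeps a single coredns replica even in multi-node clusters). The equivalent one-liner, assuming the same context:

    kubectl --context multinode-279658 -n kube-system scale deployment coredns --replicas=1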
	I0115 11:14:32.015472 1693723 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0115 11:14:32.018918 1693723 out.go:177] * Verifying Kubernetes components...
	I0115 11:14:32.020891 1693723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 11:14:32.036420 1693723 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17953-1625104/kubeconfig
	I0115 11:14:32.036729 1693723 kapi.go:59] client config for multinode-279658: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/multinode-279658/client.crt", KeyFile:"/home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/multinode-279658/client.key", CAFile:"/home/jenkins/minikube-integration/17953-1625104/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9dd0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0115 11:14:32.036996 1693723 node_ready.go:35] waiting up to 6m0s for node "multinode-279658-m02" to be "Ready" ...
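The loop that follows re-GETs the Node object roughly every 500 ms and checks its `Ready` condition; the node stays `Ready=False` until the kindnet pod has started and configured the CNI. A hedged equivalent of the whole wait using kubectl's built-in poller:

    kubectl --context multinode-279658 wait --for=condition=Ready node/multinode-279658-m02 --timeout=6m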
	I0115 11:14:32.037067 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:14:32.037078 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:32.037087 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:32.037095 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:32.039683 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:32.039705 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:32.039714 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:32.039720 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:32.039756 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:32.039770 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:32.039778 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:32 GMT
	I0115 11:14:32.039787 1693723 round_trippers.go:580]     Audit-Id: 76e4156d-46ad-4554-b39f-b279cfc2fc14
	I0115 11:14:32.039951 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"507","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0115 11:14:32.538179 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:14:32.538203 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:32.538218 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:32.538225 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:32.540753 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:32.540779 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:32.540790 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:32.540797 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:32.540803 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:32.540809 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:32.540816 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:32 GMT
	I0115 11:14:32.540823 1693723 round_trippers.go:580]     Audit-Id: 89d1c3a6-c31b-4cf6-9a0e-e4a01a4ef326
	I0115 11:14:32.540944 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"507","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0115 11:14:33.038019 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:14:33.038047 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:33.038058 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:33.038066 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:33.040746 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:33.040771 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:33.040781 1693723 round_trippers.go:580]     Audit-Id: de7e4165-7b20-4079-ad09-f88f67e5fbe2
	I0115 11:14:33.040788 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:33.040794 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:33.040801 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:33.040807 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:33.040813 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:33 GMT
	I0115 11:14:33.041218 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"507","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0115 11:14:33.537886 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:14:33.537919 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:33.537929 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:33.537937 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:33.540484 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:33.540506 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:33.540515 1693723 round_trippers.go:580]     Audit-Id: e7c5c1e2-743c-4ea2-8530-6b19e093fbd3
	I0115 11:14:33.540521 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:33.540527 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:33.540534 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:33.540540 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:33.540547 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:33 GMT
	I0115 11:14:33.540750 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"507","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0115 11:14:34.037280 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:14:34.037303 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:34.037315 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:34.037322 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:34.039939 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:34.039960 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:34.039969 1693723 round_trippers.go:580]     Audit-Id: 0533a01b-1e5a-4937-b876-aad9612c41c6
	I0115 11:14:34.039975 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:34.039981 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:34.039987 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:34.039993 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:34.040000 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:34 GMT
	I0115 11:14:34.040118 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"507","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0115 11:14:34.040493 1693723 node_ready.go:58] node "multinode-279658-m02" has status "Ready":"False"
	I0115 11:14:34.537812 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:14:34.537836 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:34.537846 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:34.537854 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:34.540417 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:34.540444 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:34.540452 1693723 round_trippers.go:580]     Audit-Id: 33152a89-6c62-47d3-ab92-b3d35729af7a
	I0115 11:14:34.540459 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:34.540465 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:34.540471 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:34.540477 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:34.540488 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:34 GMT
	I0115 11:14:34.540593 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"507","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0115 11:14:35.037247 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:14:35.037272 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:35.037282 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:35.037290 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:35.039887 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:35.039917 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:35.039927 1693723 round_trippers.go:580]     Audit-Id: b8539b54-a0d6-46de-9d6c-41438473654b
	I0115 11:14:35.039934 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:35.039940 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:35.039946 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:35.039952 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:35.039964 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:35 GMT
	I0115 11:14:35.040124 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"507","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0115 11:14:35.537200 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:14:35.537241 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:35.537252 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:35.537259 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:35.539889 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:35.539917 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:35.539928 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:35.539935 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:35.539942 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:35 GMT
	I0115 11:14:35.539949 1693723 round_trippers.go:580]     Audit-Id: a71459c3-3ace-4cb1-8eda-ab0595d98031
	I0115 11:14:35.539956 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:35.539962 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:35.540184 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"507","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0115 11:14:36.037802 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:14:36.037825 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:36.037835 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:36.037843 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:36.040423 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:36.040453 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:36.040462 1693723 round_trippers.go:580]     Audit-Id: ebf03b8d-020f-4557-be9d-478e30c30778
	I0115 11:14:36.040468 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:36.040474 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:36.040481 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:36.040487 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:36.040499 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:36 GMT
	I0115 11:14:36.041173 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"507","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0115 11:14:36.041562 1693723 node_ready.go:58] node "multinode-279658-m02" has status "Ready":"False"
	I0115 11:14:36.537276 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:14:36.537303 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:36.537313 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:36.537320 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:36.539932 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:36.539961 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:36.539984 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:36.539991 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:36.539999 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:36.540007 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:36 GMT
	I0115 11:14:36.540018 1693723 round_trippers.go:580]     Audit-Id: 9cbcf2ae-aaf8-4aee-a952-b6cee904dea9
	I0115 11:14:36.540024 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:36.540457 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"507","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0115 11:14:37.038203 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:14:37.038243 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:37.038253 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:37.038261 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:37.041119 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:37.041154 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:37.041166 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:37.041173 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:37.041180 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:37.041187 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:37.041193 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:37 GMT
	I0115 11:14:37.041200 1693723 round_trippers.go:580]     Audit-Id: eb4dab43-eb27-43ef-b66b-369a872f54eb
	I0115 11:14:37.041326 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"507","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0115 11:14:37.537442 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:14:37.537497 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:37.537508 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:37.537516 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:37.540063 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:37.540089 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:37.540098 1693723 round_trippers.go:580]     Audit-Id: 5a29c3b5-076f-480a-ab6e-728507396042
	I0115 11:14:37.540105 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:37.540111 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:37.540117 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:37.540124 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:37.540130 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:37 GMT
	I0115 11:14:37.540241 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"507","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0115 11:14:38.037319 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:14:38.037342 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:38.037351 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:38.037359 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:38.039947 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:38.039983 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:38.039992 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:38.039999 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:38.040005 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:38.040023 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:38.040063 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:38 GMT
	I0115 11:14:38.040070 1693723 round_trippers.go:580]     Audit-Id: 1689fa66-b6d9-4bbc-bdec-32992c69bd4a
	I0115 11:14:38.040331 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"507","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0115 11:14:38.537257 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:14:38.537280 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:38.537289 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:38.537297 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:38.539757 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:38.539779 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:38.539787 1693723 round_trippers.go:580]     Audit-Id: 6a86b3b3-0813-4898-91b0-b33c2f9303b1
	I0115 11:14:38.539793 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:38.539799 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:38.539806 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:38.539812 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:38.539818 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:38 GMT
	I0115 11:14:38.539920 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"507","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0115 11:14:38.540332 1693723 node_ready.go:58] node "multinode-279658-m02" has status "Ready":"False"
	I0115 11:14:39.038160 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:14:39.038194 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:39.038205 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:39.038213 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:39.040807 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:39.040834 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:39.040850 1693723 round_trippers.go:580]     Audit-Id: bc2e85b7-0266-4708-9100-1f76737eff39
	I0115 11:14:39.040858 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:39.040865 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:39.040872 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:39.040882 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:39.040890 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:39 GMT
	I0115 11:14:39.041270 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"507","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0115 11:14:39.537913 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:14:39.537938 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:39.537947 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:39.537955 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:39.540474 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:39.540495 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:39.540503 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:39.540509 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:39.540516 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:39.540522 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:39 GMT
	I0115 11:14:39.540528 1693723 round_trippers.go:580]     Audit-Id: 5dc20980-1a0a-453b-a2f9-e058979c1b46
	I0115 11:14:39.540534 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:39.540658 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"507","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0115 11:14:40.037838 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:14:40.037881 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:40.037891 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:40.037899 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:40.040989 1693723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 11:14:40.041024 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:40.041033 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:40.041040 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:40.041046 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:40.041053 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:40.041064 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:40 GMT
	I0115 11:14:40.041076 1693723 round_trippers.go:580]     Audit-Id: 23979c8b-54f1-4a62-951c-b85fb1b569a8
	I0115 11:14:40.041302 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"507","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0115 11:14:40.537468 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:14:40.537495 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:40.537505 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:40.537512 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:40.540062 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:40.540091 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:40.540100 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:40 GMT
	I0115 11:14:40.540107 1693723 round_trippers.go:580]     Audit-Id: d8bfa687-d255-49fe-8900-fcb2731969e1
	I0115 11:14:40.540113 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:40.540119 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:40.540125 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:40.540137 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:40.540791 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"525","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0115 11:14:40.541185 1693723 node_ready.go:58] node "multinode-279658-m02" has status "Ready":"False"
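	(The loop above is minikube's node-readiness poll: roughly every 500ms it GETs /api/v1/nodes/multinode-279658-m02 and checks the node's "Ready" condition, repeating until it reports "True". The following is a minimal, illustrative client-go sketch of the same kind of check — it is not minikube's actual node_ready.go, and it assumes a kubeconfig at the default location; the node name is taken from this log.)

		// Illustrative sketch only -- approximates the node_ready poll seen in
		// this log; not taken from minikube's source.
		package main

		import (
			"context"
			"fmt"
			"time"

			corev1 "k8s.io/api/core/v1"
			metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
			"k8s.io/client-go/kubernetes"
			"k8s.io/client-go/tools/clientcmd"
		)

		func main() {
			// Assumes a standard kubeconfig ($HOME/.kube/config).
			config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
			if err != nil {
				panic(err)
			}
			client, err := kubernetes.NewForConfig(config)
			if err != nil {
				panic(err)
			}
			for {
				// GET the node object, as the round_trippers lines above show.
				node, err := client.CoreV1().Nodes().Get(context.TODO(),
					"multinode-279658-m02", metav1.GetOptions{})
				if err != nil {
					panic(err)
				}
				// Inspect the Ready condition, mirroring the node_ready.go:58 output.
				for _, cond := range node.Status.Conditions {
					if cond.Type == corev1.NodeReady {
						fmt.Printf("node %q has status \"Ready\":%q\n", node.Name, cond.Status)
						if cond.Status == corev1.ConditionTrue {
							return
						}
					}
				}
				time.Sleep(500 * time.Millisecond) // matches the ~500ms poll interval in the log
			}
		}

	(An equivalent one-off check from a shell, assuming the kubectl context name matches the profile name as elsewhere in this report:

		kubectl --context multinode-279658 get node multinode-279658-m02 \
		  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

	)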
	I0115 11:14:41.037916 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:14:41.037939 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:41.037949 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:41.037957 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:41.040369 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:41.040393 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:41.040402 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:41 GMT
	I0115 11:14:41.040408 1693723 round_trippers.go:580]     Audit-Id: 18911667-af37-4766-b088-5927c4aed96b
	I0115 11:14:41.040415 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:41.040421 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:41.040428 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:41.040438 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:41.040663 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"525","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0115 11:14:41.537811 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:14:41.537833 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:41.537845 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:41.537852 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:41.540573 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:41.540595 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:41.540604 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:41.540611 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:41.540618 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:41 GMT
	I0115 11:14:41.540624 1693723 round_trippers.go:580]     Audit-Id: a370eec2-f459-4880-a9f4-8a261c1bcf42
	I0115 11:14:41.540630 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:41.540637 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:41.540794 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"525","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0115 11:14:42.037362 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:14:42.037390 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:42.037401 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:42.037409 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:42.040327 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:42.040350 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:42.040359 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:42.040366 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:42.040373 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:42.040379 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:42 GMT
	I0115 11:14:42.040386 1693723 round_trippers.go:580]     Audit-Id: 8ae6eec9-85a8-4163-9002-0e2e757481e0
	I0115 11:14:42.040392 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:42.040507 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"525","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0115 11:14:42.537831 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:14:42.537857 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:42.537867 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:42.537874 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:42.540478 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:42.540503 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:42.540513 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:42.540520 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:42.540527 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:42 GMT
	I0115 11:14:42.540533 1693723 round_trippers.go:580]     Audit-Id: 3363c444-1b7e-4d20-b744-571c8fe135e0
	I0115 11:14:42.540539 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:42.540546 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:42.540659 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"525","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0115 11:14:43.037561 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:14:43.037587 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:43.037598 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:43.037605 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:43.040181 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:43.040220 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:43.040229 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:43.040237 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:43 GMT
	I0115 11:14:43.040244 1693723 round_trippers.go:580]     Audit-Id: e2ed5870-2153-4d25-94cc-ef38682e84cd
	I0115 11:14:43.040251 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:43.040257 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:43.040264 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:43.040627 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"525","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0115 11:14:43.041064 1693723 node_ready.go:58] node "multinode-279658-m02" has status "Ready":"False"
	I0115 11:14:43.538065 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:14:43.538090 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:43.538101 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:43.538108 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:43.540638 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:43.540660 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:43.540668 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:43.540675 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:43.540681 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:43.540687 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:43.540693 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:43 GMT
	I0115 11:14:43.540699 1693723 round_trippers.go:580]     Audit-Id: 2fece557-36d5-46d0-ba4c-a305a69cd4e0
	I0115 11:14:43.540818 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"525","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0115 11:14:44.037895 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:14:44.037916 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:44.037926 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:44.037939 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:44.040590 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:44.040616 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:44.040626 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:44.040633 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:44.040640 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:44.040646 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:44 GMT
	I0115 11:14:44.040652 1693723 round_trippers.go:580]     Audit-Id: bd4ff1a6-7b95-495b-b7db-530f6096b9ed
	I0115 11:14:44.040659 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:44.040776 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"525","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0115 11:14:44.537923 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:14:44.537951 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:44.537962 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:44.537970 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:44.540542 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:44.540566 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:44.540574 1693723 round_trippers.go:580]     Audit-Id: 837e2672-aa64-4745-bc6b-fe2efa7dcadd
	I0115 11:14:44.540581 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:44.540587 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:44.540593 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:44.540600 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:44.540607 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:44 GMT
	I0115 11:14:44.540719 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"525","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0115 11:14:45.037323 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:14:45.037353 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:45.037364 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:45.037372 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:45.040894 1693723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 11:14:45.040920 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:45.040930 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:45 GMT
	I0115 11:14:45.040937 1693723 round_trippers.go:580]     Audit-Id: a0150f3d-c0f6-4297-bfef-5936388c02e6
	I0115 11:14:45.040943 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:45.040950 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:45.040956 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:45.040963 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:45.041296 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"525","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0115 11:14:45.041718 1693723 node_ready.go:58] node "multinode-279658-m02" has status "Ready":"False"
	I0115 11:14:45.537665 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:14:45.537687 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:45.537697 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:45.537704 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:45.540603 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:45.540624 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:45.540632 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:45.540638 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:45.540644 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:45.540650 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:45.540657 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:45 GMT
	I0115 11:14:45.540663 1693723 round_trippers.go:580]     Audit-Id: 0e3219fe-002e-42d1-8bbb-6ea6f01a4d7b
	I0115 11:14:45.540786 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"525","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0115 11:14:46.037689 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:14:46.037714 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:46.037726 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:46.037734 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:46.040279 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:46.040301 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:46.040309 1693723 round_trippers.go:580]     Audit-Id: 72495506-d52b-424a-a8a4-e4f66efc9286
	I0115 11:14:46.040316 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:46.040322 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:46.040328 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:46.040335 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:46.040342 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:46 GMT
	I0115 11:14:46.040464 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"525","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0115 11:14:46.538109 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:14:46.538135 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:46.538152 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:46.538160 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:46.540659 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:46.540694 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:46.540703 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:46.540709 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:46.540716 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:46.540723 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:46 GMT
	I0115 11:14:46.540734 1693723 round_trippers.go:580]     Audit-Id: 6edfe47a-fb02-46da-a69a-884e44197c9f
	I0115 11:14:46.540747 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:46.541006 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"525","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0115 11:14:47.037245 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:14:47.037270 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:47.037281 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:47.037289 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:47.039850 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:47.039870 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:47.039878 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:47.039885 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:47 GMT
	I0115 11:14:47.039891 1693723 round_trippers.go:580]     Audit-Id: afe3463e-fd35-435c-848f-2afb00a45279
	I0115 11:14:47.039897 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:47.039903 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:47.039910 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:47.040074 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"525","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0115 11:14:47.537889 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:14:47.537914 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:47.537925 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:47.537932 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:47.540472 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:47.540494 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:47.540502 1693723 round_trippers.go:580]     Audit-Id: da7a18ae-bfd9-48cc-9ad9-c0bb1220ce53
	I0115 11:14:47.540509 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:47.540515 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:47.540521 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:47.540528 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:47.540534 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:47 GMT
	I0115 11:14:47.540783 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"525","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0115 11:14:47.541166 1693723 node_ready.go:58] node "multinode-279658-m02" has status "Ready":"False"
	I0115 11:14:48.037978 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:14:48.038005 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:48.038019 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:48.038027 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:48.040659 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:48.040688 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:48.040699 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:48 GMT
	I0115 11:14:48.040705 1693723 round_trippers.go:580]     Audit-Id: 70203558-1cbf-4a08-a393-6e11c9ca4f38
	I0115 11:14:48.040719 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:48.040726 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:48.040736 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:48.040742 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:48.040883 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"525","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0115 11:14:48.538031 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:14:48.538054 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:48.538065 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:48.538073 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:48.540638 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:48.540657 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:48.540665 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:48.540672 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:48.540678 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:48.540684 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:48.540691 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:48 GMT
	I0115 11:14:48.540697 1693723 round_trippers.go:580]     Audit-Id: 65bcb797-29e6-4b60-9f08-467b872251ed
	I0115 11:14:48.540818 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"525","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0115 11:14:49.037925 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:14:49.037953 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:49.037964 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:49.037971 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:49.040608 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:49.040632 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:49.040640 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:49.040648 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:49 GMT
	I0115 11:14:49.040654 1693723 round_trippers.go:580]     Audit-Id: 8a67f57c-4ff0-4ac8-98fb-4b6dc56c11d8
	I0115 11:14:49.040660 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:49.040667 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:49.040673 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:49.040836 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"525","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0115 11:14:49.538173 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:14:49.538201 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:49.538212 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:49.538219 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:49.540862 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:49.540888 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:49.540898 1693723 round_trippers.go:580]     Audit-Id: 5791d705-7e03-4a79-ba0f-17c8448531a2
	I0115 11:14:49.540913 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:49.540925 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:49.540935 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:49.540945 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:49.540954 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:49 GMT
	I0115 11:14:49.541057 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"525","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0115 11:14:49.541451 1693723 node_ready.go:58] node "multinode-279658-m02" has status "Ready":"False"
	I0115 11:14:50.037308 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:14:50.037332 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:50.037343 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:50.037351 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:50.040080 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:50.040109 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:50.040118 1693723 round_trippers.go:580]     Audit-Id: 31c8b222-b13a-4ac8-8e05-1c1e625ea35d
	I0115 11:14:50.040125 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:50.040132 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:50.040138 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:50.040145 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:50.040152 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:50 GMT
	I0115 11:14:50.040290 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"525","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0115 11:14:50.538107 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:14:50.538138 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:50.538148 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:50.538156 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:50.540795 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:50.540877 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:50.540899 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:50.540927 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:50 GMT
	I0115 11:14:50.540936 1693723 round_trippers.go:580]     Audit-Id: ca6a28c6-8c11-4751-bf94-9311f2a8a300
	I0115 11:14:50.540943 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:50.540953 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:50.540960 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:50.541073 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"525","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0115 11:14:51.037470 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:14:51.037493 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:51.037503 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:51.037510 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:51.040221 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:51.040243 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:51.040252 1693723 round_trippers.go:580]     Audit-Id: 9fc5206d-ea95-42ef-bffe-00946d1ca63d
	I0115 11:14:51.040258 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:51.040264 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:51.040270 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:51.040276 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:51.040283 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:51 GMT
	I0115 11:14:51.040412 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"525","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0115 11:14:51.537447 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:14:51.537473 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:51.537483 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:51.537491 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:51.540023 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:51.540049 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:51.540058 1693723 round_trippers.go:580]     Audit-Id: 7f8949dd-aad8-4224-b484-3c74eca9124c
	I0115 11:14:51.540065 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:51.540071 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:51.540077 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:51.540083 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:51.540090 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:51 GMT
	I0115 11:14:51.540254 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"525","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0115 11:14:52.037444 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:14:52.037468 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:52.037478 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:52.037488 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:52.039968 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:52.039992 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:52.040001 1693723 round_trippers.go:580]     Audit-Id: a660deb1-ac64-42e6-b8cc-8c53f07ba9d6
	I0115 11:14:52.040007 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:52.040014 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:52.040020 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:52.040027 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:52.040036 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:52 GMT
	I0115 11:14:52.040172 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"525","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0115 11:14:52.040598 1693723 node_ready.go:58] node "multinode-279658-m02" has status "Ready":"False"
	I0115 11:14:52.537454 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:14:52.537475 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:52.537486 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:52.537493 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:52.540159 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:52.540195 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:52.540204 1693723 round_trippers.go:580]     Audit-Id: 95ccecc4-72cd-4b3d-b52a-6def9fab46c6
	I0115 11:14:52.540214 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:52.540221 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:52.540231 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:52.540243 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:52.540257 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:52 GMT
	I0115 11:14:52.540373 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"525","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0115 11:14:53.038016 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:14:53.038039 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:53.038049 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:53.038056 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:53.040615 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:53.040653 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:53.040662 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:53.040668 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:53.040674 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:53.040682 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:53 GMT
	I0115 11:14:53.040688 1693723 round_trippers.go:580]     Audit-Id: 7e3026f6-3010-44af-95ac-3f0d27a30c20
	I0115 11:14:53.040695 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:53.040869 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"525","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0115 11:14:53.537220 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:14:53.537245 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:53.537255 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:53.537263 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:53.539917 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:53.539982 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:53.539992 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:53.539999 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:53.540006 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:53 GMT
	I0115 11:14:53.540012 1693723 round_trippers.go:580]     Audit-Id: 91877a23-5773-41f0-b892-6b36800203e5
	I0115 11:14:53.540023 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:53.540036 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:53.540143 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"525","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0115 11:14:54.037643 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:14:54.037669 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:54.037680 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:54.037687 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:54.040319 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:54.040348 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:54.040358 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:54.040366 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:54.040372 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:54.040379 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:54 GMT
	I0115 11:14:54.040385 1693723 round_trippers.go:580]     Audit-Id: 91e07272-18b5-4c21-a6a5-b0f386536ed8
	I0115 11:14:54.040391 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:54.040506 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"525","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0115 11:14:54.040901 1693723 node_ready.go:58] node "multinode-279658-m02" has status "Ready":"False"
	I0115 11:14:54.538023 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:14:54.538049 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:54.538059 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:54.538067 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:54.540575 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:54.540597 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:54.540606 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:54.540612 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:54.540619 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:54 GMT
	I0115 11:14:54.540625 1693723 round_trippers.go:580]     Audit-Id: 983f1b67-ad01-4112-9bb5-56a4a21de2aa
	I0115 11:14:54.540631 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:54.540637 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:54.540731 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"525","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0115 11:14:55.037292 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:14:55.037322 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:55.037333 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:55.037345 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:55.039985 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:55.040006 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:55.040015 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:55.040022 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:55 GMT
	I0115 11:14:55.040028 1693723 round_trippers.go:580]     Audit-Id: fed853a9-829a-4f81-8a15-d4dc265d3cc5
	I0115 11:14:55.040049 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:55.040056 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:55.040063 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:55.040175 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"525","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0115 11:14:55.537338 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:14:55.537363 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:55.537373 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:55.537380 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:55.540018 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:55.540051 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:55.540059 1693723 round_trippers.go:580]     Audit-Id: 84bb222e-372c-42ff-ae0a-76c5577b88d9
	I0115 11:14:55.540066 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:55.540072 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:55.540078 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:55.540085 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:55.540092 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:55 GMT
	I0115 11:14:55.540212 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"525","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0115 11:14:56.037355 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:14:56.037404 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:56.037418 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:56.037430 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:56.040434 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:56.040459 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:56.040475 1693723 round_trippers.go:580]     Audit-Id: 524ac421-b1d6-44ae-bf47-1a21a2fe20da
	I0115 11:14:56.040482 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:56.040491 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:56.040501 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:56.040508 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:56.040517 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:56 GMT
	I0115 11:14:56.040692 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"525","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0115 11:14:56.041142 1693723 node_ready.go:58] node "multinode-279658-m02" has status "Ready":"False"
	I0115 11:14:56.537900 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:14:56.537928 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:56.537939 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:56.537947 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:56.540484 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:56.540507 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:56.540516 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:56.540532 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:56.540538 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:56.540547 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:56 GMT
	I0115 11:14:56.540558 1693723 round_trippers.go:580]     Audit-Id: cf255029-1971-4b45-85a8-9b9ad5d99f7e
	I0115 11:14:56.540565 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:56.540696 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"525","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0115 11:14:57.037822 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:14:57.037843 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:57.037854 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:57.037861 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:57.040434 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:57.040463 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:57.040472 1693723 round_trippers.go:580]     Audit-Id: 05a6ab8c-b4c1-4859-a96b-b96c5dc60c9e
	I0115 11:14:57.040479 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:57.040485 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:57.040491 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:57.040497 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:57.040513 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:57 GMT
	I0115 11:14:57.040634 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"525","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0115 11:14:57.537836 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:14:57.537863 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:57.537872 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:57.537880 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:57.540403 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:57.540427 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:57.540436 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:57.540442 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:57 GMT
	I0115 11:14:57.540448 1693723 round_trippers.go:580]     Audit-Id: 0fb251f7-fa42-4e53-8a70-b3b76b7a1ccb
	I0115 11:14:57.540454 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:57.540460 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:57.540467 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:57.540590 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"525","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0115 11:14:58.037257 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:14:58.037293 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:58.037304 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:58.037311 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:58.039881 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:58.039906 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:58.039915 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:58.039922 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:58.039928 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:58 GMT
	I0115 11:14:58.039934 1693723 round_trippers.go:580]     Audit-Id: 3cf285cf-5709-4015-9314-172f5b234745
	I0115 11:14:58.039941 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:58.039947 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:58.040058 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"525","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0115 11:14:58.538078 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:14:58.538105 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:58.538116 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:58.538123 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:58.540568 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:58.540592 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:58.540600 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:58 GMT
	I0115 11:14:58.540607 1693723 round_trippers.go:580]     Audit-Id: b0d4ac76-1178-4bba-bab7-268c71d0ed2a
	I0115 11:14:58.540613 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:58.540619 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:58.540625 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:58.540633 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:58.540752 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"525","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0115 11:14:58.541129 1693723 node_ready.go:58] node "multinode-279658-m02" has status "Ready":"False"
	I0115 11:14:59.037856 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:14:59.037885 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:59.037896 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:59.037903 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:59.040535 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:59.040558 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:59.040566 1693723 round_trippers.go:580]     Audit-Id: 597b7cc7-5a94-4ecd-bfa0-cb002f4fa839
	I0115 11:14:59.040572 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:59.040580 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:59.040587 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:59.040593 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:59.040599 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:59 GMT
	I0115 11:14:59.040728 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"525","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0115 11:14:59.537902 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:14:59.537924 1693723 round_trippers.go:469] Request Headers:
	I0115 11:14:59.537934 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:14:59.537942 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:14:59.540790 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:14:59.540816 1693723 round_trippers.go:577] Response Headers:
	I0115 11:14:59.540824 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:14:59 GMT
	I0115 11:14:59.540831 1693723 round_trippers.go:580]     Audit-Id: c5a3785e-7a7f-48cc-af06-c515e7227c86
	I0115 11:14:59.540837 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:14:59.540843 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:14:59.540849 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:14:59.540856 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:14:59.540992 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"525","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0115 11:15:00.037262 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:15:00.037297 1693723 round_trippers.go:469] Request Headers:
	I0115 11:15:00.037307 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:15:00.037315 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:15:00.051254 1693723 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0115 11:15:00.051278 1693723 round_trippers.go:577] Response Headers:
	I0115 11:15:00.051288 1693723 round_trippers.go:580]     Audit-Id: 4ecb9371-cf98-4ed2-a64b-49e40c42f63e
	I0115 11:15:00.051295 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:15:00.051302 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:15:00.051309 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:15:00.051315 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:15:00.051322 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:15:00 GMT
	I0115 11:15:00.051438 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"525","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0115 11:15:00.537460 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:15:00.537486 1693723 round_trippers.go:469] Request Headers:
	I0115 11:15:00.537497 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:15:00.537504 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:15:00.540071 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:15:00.540097 1693723 round_trippers.go:577] Response Headers:
	I0115 11:15:00.540107 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:15:00.540114 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:15:00.540120 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:15:00.540128 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:15:00.540135 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:15:00 GMT
	I0115 11:15:00.540142 1693723 round_trippers.go:580]     Audit-Id: 8d21a09a-efd1-4e30-926a-385f4ff66bd8
	I0115 11:15:00.540262 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"525","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0115 11:15:01.038015 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:15:01.038041 1693723 round_trippers.go:469] Request Headers:
	I0115 11:15:01.038051 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:15:01.038058 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:15:01.040880 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:15:01.040909 1693723 round_trippers.go:577] Response Headers:
	I0115 11:15:01.040919 1693723 round_trippers.go:580]     Audit-Id: 39f72bbc-2297-4c66-a6d0-0af7e6dc4607
	I0115 11:15:01.040927 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:15:01.040933 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:15:01.040992 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:15:01.041003 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:15:01.041010 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:15:01 GMT
	I0115 11:15:01.041255 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"525","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0115 11:15:01.041654 1693723 node_ready.go:58] node "multinode-279658-m02" has status "Ready":"False"
	I0115 11:15:01.537951 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:15:01.537975 1693723 round_trippers.go:469] Request Headers:
	I0115 11:15:01.537984 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:15:01.537992 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:15:01.540739 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:15:01.540761 1693723 round_trippers.go:577] Response Headers:
	I0115 11:15:01.540769 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:15:01.540775 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:15:01.540782 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:15:01 GMT
	I0115 11:15:01.540788 1693723 round_trippers.go:580]     Audit-Id: 95bccc55-dd22-4cce-a573-145a27bdd282
	I0115 11:15:01.540794 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:15:01.540800 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:15:01.540959 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"525","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0115 11:15:02.038164 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:15:02.038194 1693723 round_trippers.go:469] Request Headers:
	I0115 11:15:02.038210 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:15:02.038217 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:15:02.041178 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:15:02.041205 1693723 round_trippers.go:577] Response Headers:
	I0115 11:15:02.041214 1693723 round_trippers.go:580]     Audit-Id: b0b98e31-48db-428c-a64e-dd40fad4addd
	I0115 11:15:02.041221 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:15:02.041227 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:15:02.041233 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:15:02.041239 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:15:02.041246 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:15:02 GMT
	I0115 11:15:02.041373 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"548","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5930 chars]
	I0115 11:15:02.041750 1693723 node_ready.go:49] node "multinode-279658-m02" has status "Ready":"True"
	I0115 11:15:02.041772 1693723 node_ready.go:38] duration metric: took 30.004757008s waiting for node "multinode-279658-m02" to be "Ready" ...
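The block above is the node_ready wait loop: a GET against /api/v1/nodes/multinode-279658-m02 roughly every 500ms, inspecting status.conditions until Ready flips to True (here after about 30s). For orientation only, a minimal client-go sketch of the same poll pattern; this is not minikube's actual node_ready.go, and waitNodeReady plus the default kubeconfig path are illustrative assumptions:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the API server every 500ms (the cadence visible in the
// log above) until the node's Ready condition is True or the timeout expires.
// Illustrative sketch, not minikube's implementation.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat errors as transient and keep polling
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(context.Background(), cs, "multinode-279658-m02", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}

A fixed polling interval like this is consistent with the regular ~.037/.537 timestamps in the log; the 6m bound matches the waits stated below.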
	I0115 11:15:02.041784 1693723 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 11:15:02.041844 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0115 11:15:02.041854 1693723 round_trippers.go:469] Request Headers:
	I0115 11:15:02.041862 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:15:02.041869 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:15:02.045571 1693723 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0115 11:15:02.045598 1693723 round_trippers.go:577] Response Headers:
	I0115 11:15:02.045607 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:15:02.045614 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:15:02 GMT
	I0115 11:15:02.045621 1693723 round_trippers.go:580]     Audit-Id: 73d53ae9-0d8c-4a11-8594-f4c33c0c7789
	I0115 11:15:02.045627 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:15:02.045634 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:15:02.045643 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:15:02.046246 1693723 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"548"},"items":[{"metadata":{"name":"coredns-5dd5756b68-rmgns","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"20120dfd-708f-4d25-a64a-d790f55c3e56","resourceVersion":"441","creationTimestamp":"2024-01-15T11:13:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b3fe011c-57d0-4c2b-b5b4-50a12422361f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b3fe011c-57d0-4c2b-b5b4-50a12422361f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68972 chars]
	I0115 11:15:02.049155 1693723 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-rmgns" in "kube-system" namespace to be "Ready" ...
	I0115 11:15:02.049250 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rmgns
	I0115 11:15:02.049264 1693723 round_trippers.go:469] Request Headers:
	I0115 11:15:02.049274 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:15:02.049281 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:15:02.051852 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:15:02.051912 1693723 round_trippers.go:577] Response Headers:
	I0115 11:15:02.051935 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:15:02.051958 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:15:02.051995 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:15:02 GMT
	I0115 11:15:02.052011 1693723 round_trippers.go:580]     Audit-Id: c62af10f-d95f-4ce3-bc3f-4a8afc1b0557
	I0115 11:15:02.052018 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:15:02.052025 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:15:02.052134 1693723 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-rmgns","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"20120dfd-708f-4d25-a64a-d790f55c3e56","resourceVersion":"441","creationTimestamp":"2024-01-15T11:13:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b3fe011c-57d0-4c2b-b5b4-50a12422361f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b3fe011c-57d0-4c2b-b5b4-50a12422361f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0115 11:15:02.052671 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:15:02.052689 1693723 round_trippers.go:469] Request Headers:
	I0115 11:15:02.052700 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:15:02.052707 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:15:02.055050 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:15:02.055075 1693723 round_trippers.go:577] Response Headers:
	I0115 11:15:02.055084 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:15:02.055091 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:15:02.055105 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:15:02.055111 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:15:02.055118 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:15:02 GMT
	I0115 11:15:02.055127 1693723 round_trippers.go:580]     Audit-Id: 7500ab95-c667-4959-b91a-5a53f5fe9a1a
	I0115 11:15:02.055482 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"415","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0115 11:15:02.055851 1693723 pod_ready.go:92] pod "coredns-5dd5756b68-rmgns" in "kube-system" namespace has status "Ready":"True"
	I0115 11:15:02.055864 1693723 pod_ready.go:81] duration metric: took 6.678363ms waiting for pod "coredns-5dd5756b68-rmgns" in "kube-system" namespace to be "Ready" ...
	I0115 11:15:02.055874 1693723 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-279658" in "kube-system" namespace to be "Ready" ...
	I0115 11:15:02.055938 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-279658
	I0115 11:15:02.055943 1693723 round_trippers.go:469] Request Headers:
	I0115 11:15:02.055950 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:15:02.055957 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:15:02.058240 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:15:02.058263 1693723 round_trippers.go:577] Response Headers:
	I0115 11:15:02.058272 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:15:02.058305 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:15:02.058313 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:15:02 GMT
	I0115 11:15:02.058321 1693723 round_trippers.go:580]     Audit-Id: 035ef020-0fee-48a7-80cd-2f5302f9306d
	I0115 11:15:02.058327 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:15:02.058333 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:15:02.058440 1693723 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-279658","namespace":"kube-system","uid":"9aff8988-2d38-4d15-98cd-c3a9fa9bd280","resourceVersion":"325","creationTimestamp":"2024-01-15T11:13:29Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"f80ca0c251f41900e39544fa906af512","kubernetes.io/config.mirror":"f80ca0c251f41900e39544fa906af512","kubernetes.io/config.seen":"2024-01-15T11:13:29.009107303Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0115 11:15:02.058927 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:15:02.058936 1693723 round_trippers.go:469] Request Headers:
	I0115 11:15:02.058944 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:15:02.058951 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:15:02.061228 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:15:02.061249 1693723 round_trippers.go:577] Response Headers:
	I0115 11:15:02.061257 1693723 round_trippers.go:580]     Audit-Id: ceade86f-c49b-4415-9546-e1f229b7bd3f
	I0115 11:15:02.061263 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:15:02.061270 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:15:02.061276 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:15:02.061288 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:15:02.061294 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:15:02 GMT
	I0115 11:15:02.061470 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"415","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0115 11:15:02.061877 1693723 pod_ready.go:92] pod "etcd-multinode-279658" in "kube-system" namespace has status "Ready":"True"
	I0115 11:15:02.061897 1693723 pod_ready.go:81] duration metric: took 6.015577ms waiting for pod "etcd-multinode-279658" in "kube-system" namespace to be "Ready" ...
	I0115 11:15:02.061925 1693723 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-279658" in "kube-system" namespace to be "Ready" ...
	I0115 11:15:02.061993 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-279658
	I0115 11:15:02.062004 1693723 round_trippers.go:469] Request Headers:
	I0115 11:15:02.062013 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:15:02.062024 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:15:02.066404 1693723 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0115 11:15:02.066429 1693723 round_trippers.go:577] Response Headers:
	I0115 11:15:02.066439 1693723 round_trippers.go:580]     Audit-Id: 0ebdac01-9e3b-4f1f-9506-e0b4b4d9325f
	I0115 11:15:02.066445 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:15:02.066451 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:15:02.066457 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:15:02.066464 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:15:02.066470 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:15:02 GMT
	I0115 11:15:02.066604 1693723 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-279658","namespace":"kube-system","uid":"693a03b4-3bdf-4de1-87cb-f4b6b524a7cf","resourceVersion":"321","creationTimestamp":"2024-01-15T11:13:29Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"e792308696bb4be3fddd132c9ec0f17b","kubernetes.io/config.mirror":"e792308696bb4be3fddd132c9ec0f17b","kubernetes.io/config.seen":"2024-01-15T11:13:29.009099369Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0115 11:15:02.067121 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:15:02.067137 1693723 round_trippers.go:469] Request Headers:
	I0115 11:15:02.067145 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:15:02.067152 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:15:02.069566 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:15:02.069591 1693723 round_trippers.go:577] Response Headers:
	I0115 11:15:02.069600 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:15:02.069606 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:15:02.069612 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:15:02.069618 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:15:02.069625 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:15:02 GMT
	I0115 11:15:02.069631 1693723 round_trippers.go:580]     Audit-Id: c041e3a2-280b-4ddb-9187-43df998936d6
	I0115 11:15:02.069732 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"415","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0115 11:15:02.070144 1693723 pod_ready.go:92] pod "kube-apiserver-multinode-279658" in "kube-system" namespace has status "Ready":"True"
	I0115 11:15:02.070164 1693723 pod_ready.go:81] duration metric: took 8.227673ms waiting for pod "kube-apiserver-multinode-279658" in "kube-system" namespace to be "Ready" ...
	I0115 11:15:02.070175 1693723 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-279658" in "kube-system" namespace to be "Ready" ...
	I0115 11:15:02.070238 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-279658
	I0115 11:15:02.070248 1693723 round_trippers.go:469] Request Headers:
	I0115 11:15:02.070256 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:15:02.070263 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:15:02.072705 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:15:02.072762 1693723 round_trippers.go:577] Response Headers:
	I0115 11:15:02.072784 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:15:02.072806 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:15:02.072840 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:15:02.072863 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:15:02 GMT
	I0115 11:15:02.072882 1693723 round_trippers.go:580]     Audit-Id: aceadb89-40ff-4298-81a8-d20d3a9d608a
	I0115 11:15:02.072903 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:15:02.073071 1693723 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-279658","namespace":"kube-system","uid":"60d65709-5636-408d-8e80-491f1a4dfa1b","resourceVersion":"319","creationTimestamp":"2024-01-15T11:13:27Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"daeb02f972339046c9bf6a96a2b71156","kubernetes.io/config.mirror":"daeb02f972339046c9bf6a96a2b71156","kubernetes.io/config.seen":"2024-01-15T11:13:21.435611758Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0115 11:15:02.073624 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:15:02.073642 1693723 round_trippers.go:469] Request Headers:
	I0115 11:15:02.073651 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:15:02.073658 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:15:02.076069 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:15:02.076093 1693723 round_trippers.go:577] Response Headers:
	I0115 11:15:02.076101 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:15:02.076107 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:15:02.076114 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:15:02 GMT
	I0115 11:15:02.076120 1693723 round_trippers.go:580]     Audit-Id: 6e26c2fe-ac2d-4721-9db4-22fa166a02e0
	I0115 11:15:02.076127 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:15:02.076137 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:15:02.076450 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"415","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0115 11:15:02.076829 1693723 pod_ready.go:92] pod "kube-controller-manager-multinode-279658" in "kube-system" namespace has status "Ready":"True"
	I0115 11:15:02.076850 1693723 pod_ready.go:81] duration metric: took 6.663955ms waiting for pod "kube-controller-manager-multinode-279658" in "kube-system" namespace to be "Ready" ...
	I0115 11:15:02.076862 1693723 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ppwh7" in "kube-system" namespace to be "Ready" ...
	I0115 11:15:02.239251 1693723 request.go:629] Waited for 162.304184ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ppwh7
	I0115 11:15:02.239337 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ppwh7
	I0115 11:15:02.239349 1693723 round_trippers.go:469] Request Headers:
	I0115 11:15:02.239358 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:15:02.239366 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:15:02.242041 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:15:02.242067 1693723 round_trippers.go:577] Response Headers:
	I0115 11:15:02.242076 1693723 round_trippers.go:580]     Audit-Id: aadde0a7-0484-4d58-a843-5b81062a0542
	I0115 11:15:02.242083 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:15:02.242089 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:15:02.242095 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:15:02.242106 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:15:02.242115 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:15:02 GMT
	I0115 11:15:02.242465 1693723 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ppwh7","generateName":"kube-proxy-","namespace":"kube-system","uid":"daefc035-a953-43dd-8cf6-6e099f8dd024","resourceVersion":"510","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a9ab3f8d-ea08-4f6a-92bb-976b14e41e6f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a9ab3f8d-ea08-4f6a-92bb-976b14e41e6f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0115 11:15:02.438427 1693723 request.go:629] Waited for 195.453673ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:15:02.438504 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658-m02
	I0115 11:15:02.438517 1693723 round_trippers.go:469] Request Headers:
	I0115 11:15:02.438526 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:15:02.438534 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:15:02.441168 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:15:02.441193 1693723 round_trippers.go:577] Response Headers:
	I0115 11:15:02.441201 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:15:02.441208 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:15:02.441239 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:15:02.441253 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:15:02 GMT
	I0115 11:15:02.441260 1693723 round_trippers.go:580]     Audit-Id: ebb7f500-cec0-4f36-b436-866d89665533
	I0115 11:15:02.441266 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:15:02.441390 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658-m02","uid":"c244a70f-1456-4002-a17f-08469eca48ee","resourceVersion":"548","creationTimestamp":"2024-01-15T11:14:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_15T11_14_30_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5930 chars]
	I0115 11:15:02.441796 1693723 pod_ready.go:92] pod "kube-proxy-ppwh7" in "kube-system" namespace has status "Ready":"True"
	I0115 11:15:02.441815 1693723 pod_ready.go:81] duration metric: took 364.941971ms waiting for pod "kube-proxy-ppwh7" in "kube-system" namespace to be "Ready" ...
	I0115 11:15:02.441826 1693723 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tdtxr" in "kube-system" namespace to be "Ready" ...
	I0115 11:15:02.638599 1693723 request.go:629] Waited for 196.700165ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tdtxr
	I0115 11:15:02.638678 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tdtxr
	I0115 11:15:02.638687 1693723 round_trippers.go:469] Request Headers:
	I0115 11:15:02.638696 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:15:02.638703 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:15:02.641181 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:15:02.641219 1693723 round_trippers.go:577] Response Headers:
	I0115 11:15:02.641230 1693723 round_trippers.go:580]     Audit-Id: 410b2ebd-2c74-4328-876c-b88d872768dd
	I0115 11:15:02.641240 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:15:02.641246 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:15:02.641258 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:15:02.641264 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:15:02.641271 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:15:02 GMT
	I0115 11:15:02.641389 1693723 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tdtxr","generateName":"kube-proxy-","namespace":"kube-system","uid":"fd50a58b-d9c8-42ae-8a1a-d4716cedb568","resourceVersion":"391","creationTimestamp":"2024-01-15T11:13:41Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a9ab3f8d-ea08-4f6a-92bb-976b14e41e6f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a9ab3f8d-ea08-4f6a-92bb-976b14e41e6f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0115 11:15:02.839178 1693723 request.go:629] Waited for 197.321613ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:15:02.839239 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:15:02.839249 1693723 round_trippers.go:469] Request Headers:
	I0115 11:15:02.839264 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:15:02.839275 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:15:02.841789 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:15:02.841814 1693723 round_trippers.go:577] Response Headers:
	I0115 11:15:02.841823 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:15:02.841829 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:15:02.841836 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:15:02 GMT
	I0115 11:15:02.841842 1693723 round_trippers.go:580]     Audit-Id: 5b74fd54-af5c-4130-b143-19ed70341cde
	I0115 11:15:02.841848 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:15:02.841862 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:15:02.841981 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"415","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0115 11:15:02.842390 1693723 pod_ready.go:92] pod "kube-proxy-tdtxr" in "kube-system" namespace has status "Ready":"True"
	I0115 11:15:02.842409 1693723 pod_ready.go:81] duration metric: took 400.572522ms waiting for pod "kube-proxy-tdtxr" in "kube-system" namespace to be "Ready" ...
	I0115 11:15:02.842421 1693723 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-279658" in "kube-system" namespace to be "Ready" ...
	I0115 11:15:03.039211 1693723 request.go:629] Waited for 196.698778ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-279658
	I0115 11:15:03.039295 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-279658
	I0115 11:15:03.039301 1693723 round_trippers.go:469] Request Headers:
	I0115 11:15:03.039316 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:15:03.039325 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:15:03.042040 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:15:03.042069 1693723 round_trippers.go:577] Response Headers:
	I0115 11:15:03.042086 1693723 round_trippers.go:580]     Audit-Id: 4d803538-f242-4f7c-b67b-ee19a1909a0f
	I0115 11:15:03.042093 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:15:03.042101 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:15:03.042108 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:15:03.042114 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:15:03.042123 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:15:03 GMT
	I0115 11:15:03.042492 1693723 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-279658","namespace":"kube-system","uid":"bfd1e34e-c84e-4102-84a5-c1c5e50447d4","resourceVersion":"320","creationTimestamp":"2024-01-15T11:13:27Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0a1ae7a442119331fb27f0b43446d749","kubernetes.io/config.mirror":"0a1ae7a442119331fb27f0b43446d749","kubernetes.io/config.seen":"2024-01-15T11:13:21.435601494Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-15T11:13:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0115 11:15:03.239070 1693723 request.go:629] Waited for 196.152405ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:15:03.239148 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-279658
	I0115 11:15:03.239175 1693723 round_trippers.go:469] Request Headers:
	I0115 11:15:03.239189 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:15:03.239219 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:15:03.241690 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:15:03.241710 1693723 round_trippers.go:577] Response Headers:
	I0115 11:15:03.241718 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:15:03.241725 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:15:03 GMT
	I0115 11:15:03.241732 1693723 round_trippers.go:580]     Audit-Id: 33e2c875-bb86-4637-9f75-f8b44175515a
	I0115 11:15:03.241738 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:15:03.241744 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:15:03.241757 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:15:03.241853 1693723 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"415","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-15T11:13:25Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0115 11:15:03.242237 1693723 pod_ready.go:92] pod "kube-scheduler-multinode-279658" in "kube-system" namespace has status "Ready":"True"
	I0115 11:15:03.242256 1693723 pod_ready.go:81] duration metric: took 399.827925ms waiting for pod "kube-scheduler-multinode-279658" in "kube-system" namespace to be "Ready" ...
	I0115 11:15:03.242268 1693723 pod_ready.go:38] duration metric: took 1.20047465s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
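
The pod_ready.go entries above are a poll loop: for each system pod, GET the pod, check its Ready condition, then GET the node it is scheduled on. Below is a minimal client-go sketch of that pattern (an illustration, not minikube's actual code); the pod name is taken from the log, while kubeconfig discovery, the poll interval, and the timeout are assumptions. The "Waited ... due to client-side throttling" lines come from client-go's default rate limiter (QPS 5, burst 10), which can be raised via cfg.QPS and cfg.Burst.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config; minikube writes its context there by default.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// cfg defaults to QPS=5, Burst=10; request bursts beyond that produce
	// the "client-side throttling" waits visible in the log above.
	client := kubernetes.NewForConfigOrDie(cfg)

	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "coredns-5dd5756b68-rmgns", metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			// A pod is "Ready" when its PodReady condition is True.
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}
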
	I0115 11:15:03.242306 1693723 system_svc.go:44] waiting for kubelet service to be running ....
	I0115 11:15:03.242364 1693723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 11:15:03.255743 1693723 system_svc.go:56] duration metric: took 13.427271ms WaitForService to wait for kubelet.
	I0115 11:15:03.255767 1693723 kubeadm.go:581] duration metric: took 31.240268969s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0115 11:15:03.255788 1693723 node_conditions.go:102] verifying NodePressure condition ...
	I0115 11:15:03.439019 1693723 request.go:629] Waited for 183.161356ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0115 11:15:03.439107 1693723 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0115 11:15:03.439118 1693723 round_trippers.go:469] Request Headers:
	I0115 11:15:03.439133 1693723 round_trippers.go:473]     Accept: application/json, */*
	I0115 11:15:03.439141 1693723 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0115 11:15:03.441987 1693723 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0115 11:15:03.442100 1693723 round_trippers.go:577] Response Headers:
	I0115 11:15:03.442122 1693723 round_trippers.go:580]     Audit-Id: 13595015-141a-46c6-b6d9-fc9a37450afb
	I0115 11:15:03.442131 1693723 round_trippers.go:580]     Cache-Control: no-cache, private
	I0115 11:15:03.442150 1693723 round_trippers.go:580]     Content-Type: application/json
	I0115 11:15:03.442168 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9c8a04eb-7239-4db9-a700-79a6ab202b73
	I0115 11:15:03.442175 1693723 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e019ce36-9573-437b-a996-834056f78fb1
	I0115 11:15:03.442190 1693723 round_trippers.go:580]     Date: Mon, 15 Jan 2024 11:15:03 GMT
	I0115 11:15:03.442540 1693723 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"549"},"items":[{"metadata":{"name":"multinode-279658","uid":"58b3202b-fea9-4775-8a16-7ea46adc9021","resourceVersion":"415","creationTimestamp":"2024-01-15T11:13:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-279658","kubernetes.io/os":"linux","minikube.k8s.io/commit":"49acfca761ba3cce5d2bedb7b4a0191c7f924d23","minikube.k8s.io/name":"multinode-279658","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_15T11_13_30_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 13004 chars]
	I0115 11:15:03.443308 1693723 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0115 11:15:03.443335 1693723 node_conditions.go:123] node cpu capacity is 2
	I0115 11:15:03.443345 1693723 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0115 11:15:03.443350 1693723 node_conditions.go:123] node cpu capacity is 2
	I0115 11:15:03.443355 1693723 node_conditions.go:105] duration metric: took 187.561605ms to run NodePressure ...
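
The node_conditions.go entries just above list all nodes once and read each node's ephemeral-storage and CPU capacity, confirming that no node reports memory, disk, or PID pressure. A rough client-go equivalent, under the same kubeconfig assumption as the sketch above:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// The two capacity keys reported in the log above.
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name,
			n.Status.Capacity.StorageEphemeral().String(),
			n.Status.Capacity.Cpu().String())
		for _, cond := range n.Status.Conditions {
			switch cond.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				fmt.Printf("  %s=%s\n", cond.Type, cond.Status)
			}
		}
	}
}
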
	I0115 11:15:03.443380 1693723 start.go:228] waiting for startup goroutines ...
	I0115 11:15:03.443416 1693723 start.go:242] writing updated cluster config ...
	I0115 11:15:03.443783 1693723 ssh_runner.go:195] Run: rm -f paused
	I0115 11:15:03.506811 1693723 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0115 11:15:03.509609 1693723 out.go:177] * Done! kubectl is now configured to use "multinode-279658" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 15 11:14:36 multinode-279658 crio[899]: time="2024-01-15 11:14:36.624167010Z" level=info msg="Got pod network &{Name:coredns-5dd5756b68-jqj8x Namespace:kube-system ID:52ca71e1c51d80a8c823bc8fbbfa089ba5230a3f155339364b5d473e096fcbe8 UID:0bb83c0f-1bf1-4ade-94f6-8e46770f3371 NetNS:/var/run/netns/f2fd34a0-6b71-4bf6-bf97-dd7712a21997 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 15 11:14:36 multinode-279658 crio[899]: time="2024-01-15 11:14:36.624339764Z" level=info msg="Deleting pod kube-system_coredns-5dd5756b68-jqj8x from CNI network \"kindnet\" (type=ptp)"
	Jan 15 11:14:36 multinode-279658 crio[899]: time="2024-01-15 11:14:36.655891148Z" level=info msg="Stopped pod sandbox: 52ca71e1c51d80a8c823bc8fbbfa089ba5230a3f155339364b5d473e096fcbe8" id=580cb9b8-149f-425b-9fc8-18b78bf93ee4 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jan 15 11:14:37 multinode-279658 crio[899]: time="2024-01-15 11:14:37.231806287Z" level=info msg="Removing container: 9a6a3ca65826b0476e9311d8ef09139ba2f8730b92ad2785298e744e98d9ee12" id=5cbe180c-2d1b-4e6e-a2a9-86455bf23e58 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 15 11:14:37 multinode-279658 crio[899]: time="2024-01-15 11:14:37.257286661Z" level=info msg="Removed container 9a6a3ca65826b0476e9311d8ef09139ba2f8730b92ad2785298e744e98d9ee12: kube-system/coredns-5dd5756b68-jqj8x/coredns" id=5cbe180c-2d1b-4e6e-a2a9-86455bf23e58 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 15 11:15:04 multinode-279658 crio[899]: time="2024-01-15 11:15:04.756869431Z" level=info msg="Running pod sandbox: default/busybox-5bc68d56bd-nn8t2/POD" id=7278c81c-bd7b-4d5b-8b42-684a4f1ed714 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 15 11:15:04 multinode-279658 crio[899]: time="2024-01-15 11:15:04.756926291Z" level=warning msg="Allowed annotations are specified for workload []"
	Jan 15 11:15:04 multinode-279658 crio[899]: time="2024-01-15 11:15:04.778775552Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-nn8t2 Namespace:default ID:fe2601e7b46771dd8ad3af0c277841ac3d8f71c3f924d22edb48dc91ec031f72 UID:7e3feeaa-06ce-4e40-8c0a-3ba5d84a402b NetNS:/var/run/netns/d0641e23-cf8e-4ae5-b026-adc94db255b4 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 15 11:15:04 multinode-279658 crio[899]: time="2024-01-15 11:15:04.778813812Z" level=info msg="Adding pod default_busybox-5bc68d56bd-nn8t2 to CNI network \"kindnet\" (type=ptp)"
	Jan 15 11:15:04 multinode-279658 crio[899]: time="2024-01-15 11:15:04.790626117Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-nn8t2 Namespace:default ID:fe2601e7b46771dd8ad3af0c277841ac3d8f71c3f924d22edb48dc91ec031f72 UID:7e3feeaa-06ce-4e40-8c0a-3ba5d84a402b NetNS:/var/run/netns/d0641e23-cf8e-4ae5-b026-adc94db255b4 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 15 11:15:04 multinode-279658 crio[899]: time="2024-01-15 11:15:04.790796746Z" level=info msg="Checking pod default_busybox-5bc68d56bd-nn8t2 for CNI network kindnet (type=ptp)"
	Jan 15 11:15:04 multinode-279658 crio[899]: time="2024-01-15 11:15:04.793234051Z" level=info msg="Ran pod sandbox fe2601e7b46771dd8ad3af0c277841ac3d8f71c3f924d22edb48dc91ec031f72 with infra container: default/busybox-5bc68d56bd-nn8t2/POD" id=7278c81c-bd7b-4d5b-8b42-684a4f1ed714 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 15 11:15:04 multinode-279658 crio[899]: time="2024-01-15 11:15:04.797486678Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=d5a5cda2-7513-4607-93ce-3453e7c3c2ce name=/runtime.v1.ImageService/ImageStatus
	Jan 15 11:15:04 multinode-279658 crio[899]: time="2024-01-15 11:15:04.797702845Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=d5a5cda2-7513-4607-93ce-3453e7c3c2ce name=/runtime.v1.ImageService/ImageStatus
	Jan 15 11:15:04 multinode-279658 crio[899]: time="2024-01-15 11:15:04.798816080Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=f8dbcbd2-ec1a-4dc5-ad70-2a0f68621572 name=/runtime.v1.ImageService/PullImage
	Jan 15 11:15:04 multinode-279658 crio[899]: time="2024-01-15 11:15:04.800377723Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Jan 15 11:15:05 multinode-279658 crio[899]: time="2024-01-15 11:15:05.429354666Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Jan 15 11:15:06 multinode-279658 crio[899]: time="2024-01-15 11:15:06.651628068Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3" id=f8dbcbd2-ec1a-4dc5-ad70-2a0f68621572 name=/runtime.v1.ImageService/PullImage
	Jan 15 11:15:06 multinode-279658 crio[899]: time="2024-01-15 11:15:06.652661905Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=70713c07-4c65-4c1a-9f98-04c2c1da2fe0 name=/runtime.v1.ImageService/ImageStatus
	Jan 15 11:15:06 multinode-279658 crio[899]: time="2024-01-15 11:15:06.653282664Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1496796,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=70713c07-4c65-4c1a-9f98-04c2c1da2fe0 name=/runtime.v1.ImageService/ImageStatus
	Jan 15 11:15:06 multinode-279658 crio[899]: time="2024-01-15 11:15:06.655568276Z" level=info msg="Creating container: default/busybox-5bc68d56bd-nn8t2/busybox" id=e55f36b0-28d7-4921-905a-5523d2036a7b name=/runtime.v1.RuntimeService/CreateContainer
	Jan 15 11:15:06 multinode-279658 crio[899]: time="2024-01-15 11:15:06.655660918Z" level=warning msg="Allowed annotations are specified for workload []"
	Jan 15 11:15:06 multinode-279658 crio[899]: time="2024-01-15 11:15:06.727443983Z" level=info msg="Created container eb60eab15b0de8a241d6e234a96557aa6517c9b5c034c867cff4d5c70d2ce8f7: default/busybox-5bc68d56bd-nn8t2/busybox" id=e55f36b0-28d7-4921-905a-5523d2036a7b name=/runtime.v1.RuntimeService/CreateContainer
	Jan 15 11:15:06 multinode-279658 crio[899]: time="2024-01-15 11:15:06.728276701Z" level=info msg="Starting container: eb60eab15b0de8a241d6e234a96557aa6517c9b5c034c867cff4d5c70d2ce8f7" id=3d91dc2e-c9e0-431b-a53a-2ee58b9c12f3 name=/runtime.v1.RuntimeService/StartContainer
	Jan 15 11:15:06 multinode-279658 crio[899]: time="2024-01-15 11:15:06.738973678Z" level=info msg="Started container" PID=2185 containerID=eb60eab15b0de8a241d6e234a96557aa6517c9b5c034c867cff4d5c70d2ce8f7 description=default/busybox-5bc68d56bd-nn8t2/busybox id=3d91dc2e-c9e0-431b-a53a-2ee58b9c12f3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fe2601e7b46771dd8ad3af0c277841ac3d8f71c3f924d22edb48dc91ec031f72
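
The CRI-O entries above are the server side of CRI gRPC calls: /runtime.v1.RuntimeService/RunPodSandbox, /runtime.v1.ImageService/PullImage, then CreateContainer and StartContainer for the busybox workload. The same API can be queried directly over the crio.sock path shown in the node's cri-socket annotation. A minimal sketch that lists pod sandboxes, assuming the stock k8s.io/cri-api bindings and that socket path:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Socket path from the node's kubeadm.alpha.kubernetes.io/cri-socket annotation.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
	if err != nil {
		panic(err)
	}
	for _, sb := range resp.Items {
		id := sb.Id
		if len(id) > 13 {
			id = id[:13] // same truncated form the "container status" table uses
		}
		fmt.Printf("%s  %s/%s  %s\n", id, sb.Metadata.Namespace, sb.Metadata.Name, sb.State)
	}
}

crictl pods and crictl ps go through these same endpoints, which is what produced the container status table below.
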
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	eb60eab15b0de       gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3   5 seconds ago        Running             busybox                   0                   fe2601e7b4677       busybox-5bc68d56bd-nn8t2
	06995632fa036       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      57 seconds ago       Running             coredns                   0                   b80cd9a6e0138       coredns-5dd5756b68-rmgns
	6224c3b7fb8eb       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      57 seconds ago       Running             storage-provisioner       0                   d695c40eda77e       storage-provisioner
	315aa5cd992aa       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                      About a minute ago   Running             kindnet-cni               0                   1a5e7ae7ff00f       kindnet-ngs6h
	3ca696a001092       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39                                      About a minute ago   Running             kube-proxy                0                   7c3365eb46cf1       kube-proxy-tdtxr
	224c450ce563b       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      About a minute ago   Running             etcd                      0                   213679840bb35       etcd-multinode-279658
	84d066fd93439       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419                                      About a minute ago   Running             kube-apiserver            0                   5a36136589bbc       kube-apiserver-multinode-279658
	ae348e295f4ca       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b                                      About a minute ago   Running             kube-controller-manager   0                   f01e8277dd201       kube-controller-manager-multinode-279658
	2a760288fcafd       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54                                      About a minute ago   Running             kube-scheduler            0                   610bfd03f8eb1       kube-scheduler-multinode-279658
	
	
	==> coredns [06995632fa0365dbb8ebdd0b77e43d81c9d9fa18e17c9b1de5e00845605d9fbb] <==
	[INFO] 10.244.1.2:53564 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000108124s
	[INFO] 10.244.0.4:47425 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001124s
	[INFO] 10.244.0.4:49818 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001006112s
	[INFO] 10.244.0.4:60026 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00005307s
	[INFO] 10.244.0.4:38799 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000045365s
	[INFO] 10.244.0.4:47328 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.000820394s
	[INFO] 10.244.0.4:45647 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000045258s
	[INFO] 10.244.0.4:45823 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000042141s
	[INFO] 10.244.0.4:48119 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057705s
	[INFO] 10.244.1.2:54984 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169678s
	[INFO] 10.244.1.2:54222 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000067125s
	[INFO] 10.244.1.2:42126 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082205s
	[INFO] 10.244.1.2:59929 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00006395s
	[INFO] 10.244.0.4:35133 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00010225s
	[INFO] 10.244.0.4:32868 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000069685s
	[INFO] 10.244.0.4:45640 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000074812s
	[INFO] 10.244.0.4:43236 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063605s
	[INFO] 10.244.1.2:36881 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116215s
	[INFO] 10.244.1.2:43208 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000112334s
	[INFO] 10.244.1.2:52336 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000095472s
	[INFO] 10.244.1.2:53145 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000092125s
	[INFO] 10.244.0.4:50175 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000148509s
	[INFO] 10.244.0.4:54407 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000103119s
	[INFO] 10.244.0.4:41284 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000086693s
	[INFO] 10.244.0.4:45098 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000087661s
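
These coredns queries show the in-cluster search path at work: "kubernetes.default" is NXDOMAIN as-is and only resolves once the svc.cluster.local suffix is appended, and the PTR lookups for 1.0.96.10.in-addr.arpa and 10.0.96.10.in-addr.arpa imply the API service at 10.96.0.1 and the cluster DNS service at 10.96.0.10. A stdlib-only sketch that reproduces one of these lookups against that DNS ClusterIP (it assumes it runs somewhere with a route to the service network):

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// 10.96.0.10 is the DNS ClusterIP implied by the
	// 10.0.96.10.in-addr.arpa PTR queries in the log above.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	addrs, err := r.LookupHost(context.Background(), "kubernetes.default.svc.cluster.local")
	if err != nil {
		panic(err)
	}
	fmt.Println(addrs) // expect the API service ClusterIP, 10.96.0.1 per the PTR entries
}
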
	
	
	==> describe nodes <==
	Name:               multinode-279658
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-279658
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=49acfca761ba3cce5d2bedb7b4a0191c7f924d23
	                    minikube.k8s.io/name=multinode-279658
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_15T11_13_30_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Jan 2024 11:13:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-279658
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Jan 2024 11:15:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Jan 2024 11:14:13 +0000   Mon, 15 Jan 2024 11:13:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Jan 2024 11:14:13 +0000   Mon, 15 Jan 2024 11:13:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Jan 2024 11:14:13 +0000   Mon, 15 Jan 2024 11:13:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Jan 2024 11:14:13 +0000   Mon, 15 Jan 2024 11:14:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-279658
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 7e4affd1b0c743318009e5450e82152e
	  System UUID:                f5c39d0f-469d-4e38-9218-d90dc367e517
	  Boot ID:                    2320f45f-1c30-479b-83e7-a1d3daee01d1
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-nn8t2                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 coredns-5dd5756b68-rmgns                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     89s
	  kube-system                 etcd-multinode-279658                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         102s
	  kube-system                 kindnet-ngs6h                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      90s
	  kube-system                 kube-apiserver-multinode-279658             250m (12%)    0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-controller-manager-multinode-279658    200m (10%)    0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-proxy-tdtxr                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-scheduler-multinode-279658             100m (5%)     0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 88s                  kube-proxy       
	  Normal  Starting                 110s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  110s (x8 over 110s)  kubelet          Node multinode-279658 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    110s (x8 over 110s)  kubelet          Node multinode-279658 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     110s (x8 over 110s)  kubelet          Node multinode-279658 status is now: NodeHasSufficientPID
	  Normal  Starting                 103s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  102s                 kubelet          Node multinode-279658 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    102s                 kubelet          Node multinode-279658 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     102s                 kubelet          Node multinode-279658 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           90s                  node-controller  Node multinode-279658 event: Registered Node multinode-279658 in Controller
	  Normal  NodeReady                58s                  kubelet          Node multinode-279658 status is now: NodeReady
	
	
	Name:               multinode-279658-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-279658-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=49acfca761ba3cce5d2bedb7b4a0191c7f924d23
	                    minikube.k8s.io/name=multinode-279658
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_15T11_14_30_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Jan 2024 11:14:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-279658-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Jan 2024 11:15:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Jan 2024 11:15:01 +0000   Mon, 15 Jan 2024 11:14:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Jan 2024 11:15:01 +0000   Mon, 15 Jan 2024 11:14:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Jan 2024 11:15:01 +0000   Mon, 15 Jan 2024 11:14:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Jan 2024 11:15:01 +0000   Mon, 15 Jan 2024 11:15:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-279658-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 3df53f68fab64ee1b88ba069c1ab37ef
	  System UUID:                924ab9e9-6dc8-4f3b-93f5-ed6145582cef
	  Boot ID:                    2320f45f-1c30-479b-83e7-a1d3daee01d1
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-drm6d    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kindnet-fm25l               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      41s
	  kube-system                 kube-proxy-ppwh7            0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 40s                kube-proxy       
	  Normal  NodeHasSufficientMemory  41s (x5 over 43s)  kubelet          Node multinode-279658-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s (x5 over 43s)  kubelet          Node multinode-279658-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s (x5 over 43s)  kubelet          Node multinode-279658-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           40s                node-controller  Node multinode-279658-m02 event: Registered Node multinode-279658-m02 in Controller
	  Normal  NodeReady                10s                kubelet          Node multinode-279658-m02 status is now: NodeReady
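	# The two node summaries above follow `kubectl describe node` output. A
	# minimal sketch to regenerate them for this profile (context name taken
	# from the kubectl commands elsewhere in this report):
	#   kubectl --context multinode-279658 describe node multinode-279658 multinode-279658-m02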
	
	
	==> dmesg <==
	[  +0.001143] FS-Cache: O-key=[8] '83663b0000000000'
	[  +0.000776] FS-Cache: N-cookie c=00000078 [p=0000006f fl=2 nc=0 na=1]
	[  +0.001066] FS-Cache: N-cookie d=00000000d00daa15{9p.inode} n=000000002dc74ee5
	[  +0.001134] FS-Cache: N-key=[8] '83663b0000000000'
	[  +0.003126] FS-Cache: Duplicate cookie detected
	[  +0.000775] FS-Cache: O-cookie c=00000072 [p=0000006f fl=226 nc=0 na=1]
	[  +0.001074] FS-Cache: O-cookie d=00000000d00daa15{9p.inode} n=00000000fe92a6ba
	[  +0.001159] FS-Cache: O-key=[8] '83663b0000000000'
	[  +0.000832] FS-Cache: N-cookie c=00000079 [p=0000006f fl=2 nc=0 na=1]
	[  +0.001030] FS-Cache: N-cookie d=00000000d00daa15{9p.inode} n=00000000739a828d
	[  +0.001132] FS-Cache: N-key=[8] '83663b0000000000'
	[  +3.222739] FS-Cache: Duplicate cookie detected
	[  +0.000781] FS-Cache: O-cookie c=00000070 [p=0000006f fl=226 nc=0 na=1]
	[  +0.001104] FS-Cache: O-cookie d=00000000d00daa15{9p.inode} n=0000000053a68b73
	[  +0.001216] FS-Cache: O-key=[8] '82663b0000000000'
	[  +0.000802] FS-Cache: N-cookie c=0000007b [p=0000006f fl=2 nc=0 na=1]
	[  +0.001049] FS-Cache: N-cookie d=00000000d00daa15{9p.inode} n=000000002dc74ee5
	[  +0.001173] FS-Cache: N-key=[8] '82663b0000000000'
	[  +0.302135] FS-Cache: Duplicate cookie detected
	[  +0.000774] FS-Cache: O-cookie c=00000075 [p=0000006f fl=226 nc=0 na=1]
	[  +0.001061] FS-Cache: O-cookie d=00000000d00daa15{9p.inode} n=00000000dec4625c
	[  +0.001206] FS-Cache: O-key=[8] '89663b0000000000'
	[  +0.000768] FS-Cache: N-cookie c=0000007c [p=0000006f fl=2 nc=0 na=1]
	[  +0.001039] FS-Cache: N-cookie d=00000000d00daa15{9p.inode} n=00000000142868f2
	[  +0.001158] FS-Cache: N-key=[8] '89663b0000000000'
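	# The repeating FS-Cache "Duplicate cookie detected" entries are kernel
	# cache-layer noise rather than test failures. A minimal sketch to extract
	# just these lines from the node:
	#   out/minikube-linux-arm64 -p multinode-279658 ssh "sudo dmesg | grep FS-Cache"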
	
	
	==> etcd [224c450ce563b8373bd7d3da141f68240f5546100495699c01ea5835fd2288e1] <==
	{"level":"info","ts":"2024-01-15T11:13:22.298152Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-15T11:13:22.298185Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-15T11:13:22.298194Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-15T11:13:22.298458Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2024-01-15T11:13:22.298532Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2024-01-15T11:13:22.298655Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2024-01-15T11:13:22.298681Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2024-01-15T11:13:22.666321Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-15T11:13:22.666371Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-15T11:13:22.666397Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2024-01-15T11:13:22.66641Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2024-01-15T11:13:22.666417Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2024-01-15T11:13:22.666427Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2024-01-15T11:13:22.666435Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2024-01-15T11:13:22.674444Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-279658 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-15T11:13:22.674591Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-15T11:13:22.675582Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-15T11:13:22.675688Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-15T11:13:22.67579Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-15T11:13:22.682681Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-15T11:13:22.682761Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-15T11:13:22.682785Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-15T11:13:22.683189Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2024-01-15T11:13:22.683307Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-15T11:13:22.683354Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
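	# The etcd log shows a clean single-member election (leader at term 2) and
	# client serving on 192.168.58.2:2379. A minimal health-check sketch,
	# assuming etcdctl is available on the node and that minikube keeps its etcd
	# certs under /var/lib/minikube/certs/etcd (both are assumptions):
	#   out/minikube-linux-arm64 -p multinode-279658 ssh "sudo ETCDCTL_API=3 etcdctl \
	#     --endpoints=https://127.0.0.1:2379 \
	#     --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	#     --cert=/var/lib/minikube/certs/etcd/server.crt \
	#     --key=/var/lib/minikube/certs/etcd/server.key endpoint health"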
	
	
	==> kernel <==
	 11:15:12 up  9:57,  0 users,  load average: 1.16, 1.80, 1.80
	Linux multinode-279658 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [315aa5cd992aaa33942b3bd88f3588b11963d013d172e41a3c482ee55c041bbb] <==
	I0115 11:13:42.933776       1 main.go:116] setting mtu 1500 for CNI 
	I0115 11:13:42.933790       1 main.go:146] kindnetd IP family: "ipv4"
	I0115 11:13:42.933800       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0115 11:14:13.242468       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0115 11:14:13.256211       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0115 11:14:13.256243       1 main.go:227] handling current node
	I0115 11:14:23.269878       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0115 11:14:23.269905       1 main.go:227] handling current node
	I0115 11:14:33.281755       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0115 11:14:33.281786       1 main.go:227] handling current node
	I0115 11:14:33.281798       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0115 11:14:33.281804       1 main.go:250] Node multinode-279658-m02 has CIDR [10.244.1.0/24] 
	I0115 11:14:33.281970       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I0115 11:14:43.286994       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0115 11:14:43.287020       1 main.go:227] handling current node
	I0115 11:14:43.287031       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0115 11:14:43.287037       1 main.go:250] Node multinode-279658-m02 has CIDR [10.244.1.0/24] 
	I0115 11:14:53.298099       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0115 11:14:53.298129       1 main.go:227] handling current node
	I0115 11:14:53.298140       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0115 11:14:53.298146       1 main.go:250] Node multinode-279658-m02 has CIDR [10.244.1.0/24] 
	I0115 11:15:03.307306       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0115 11:15:03.307335       1 main.go:227] handling current node
	I0115 11:15:03.307346       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0115 11:15:03.307352       1 main.go:250] Node multinode-279658-m02 has CIDR [10.244.1.0/24] 
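	# kindnet reports programming a route for the second node's pod CIDR
	# (10.244.1.0/24 via 192.168.58.3). A minimal sketch to confirm the route
	# actually landed in the primary node's routing table:
	#   out/minikube-linux-arm64 -p multinode-279658 ssh "ip route show 10.244.1.0/24"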
	
	
	==> kube-apiserver [84d066fd93439a19486fb0ebc2853a2b491faefc8795ce2c300608f039e84a0b] <==
	I0115 11:13:26.160952       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0115 11:13:26.162977       1 controller.go:624] quota admission added evaluator for: namespaces
	I0115 11:13:26.164627       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0115 11:13:26.165024       1 shared_informer.go:318] Caches are synced for configmaps
	I0115 11:13:26.165109       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0115 11:13:26.165399       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0115 11:13:26.165574       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0115 11:13:26.166125       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	E0115 11:13:26.169344       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0115 11:13:26.195124       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0115 11:13:26.868040       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0115 11:13:26.872981       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0115 11:13:26.873003       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0115 11:13:27.461975       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0115 11:13:27.503765       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0115 11:13:27.592797       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0115 11:13:27.607610       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0115 11:13:27.608685       1 controller.go:624] quota admission added evaluator for: endpoints
	I0115 11:13:27.613014       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0115 11:13:28.082524       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0115 11:13:28.923666       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0115 11:13:28.943434       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0115 11:13:28.963534       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0115 11:13:41.855665       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0115 11:13:41.865250       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [ae348e295f4ca7331c6894287e718e48fa4e5e89c56a42cc385c40aa03bb9836] <==
	I0115 11:14:30.351084       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-ppwh7"
	I0115 11:14:30.351718       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-fm25l"
	I0115 11:14:31.539176       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I0115 11:14:31.564403       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-jqj8x"
	I0115 11:14:31.576578       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="38.174298ms"
	I0115 11:14:31.593118       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="16.474735ms"
	I0115 11:14:31.593404       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="149.871µs"
	I0115 11:14:31.790560       1 event.go:307] "Event occurred" object="multinode-279658-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-279658-m02 event: Registered Node multinode-279658-m02 in Controller"
	I0115 11:14:31.790652       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-279658-m02"
	I0115 11:14:36.673773       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="90.155µs"
	I0115 11:14:37.245938       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="154.974µs"
	I0115 11:14:37.261008       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="61.758µs"
	I0115 11:14:37.264437       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="65.737µs"
	I0115 11:15:01.974647       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-279658-m02"
	I0115 11:15:04.392155       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I0115 11:15:04.412851       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-drm6d"
	I0115 11:15:04.433530       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-nn8t2"
	I0115 11:15:04.483299       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="90.902295ms"
	I0115 11:15:04.511258       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="27.874504ms"
	I0115 11:15:04.511332       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="37.768µs"
	I0115 11:15:06.806106       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-drm6d" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-drm6d"
	I0115 11:15:06.996870       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="6.113766ms"
	I0115 11:15:06.996948       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="35.215µs"
	I0115 11:15:07.306448       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="6.32684ms"
	I0115 11:15:07.306667       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="42.649µs"
	
	
	==> kube-proxy [3ca696a001092174e616d9fa4271cd7aeea10714fcf1d60b83e528cefe0d3993] <==
	I0115 11:13:43.069341       1 server_others.go:69] "Using iptables proxy"
	I0115 11:13:43.087524       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I0115 11:13:43.137727       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0115 11:13:43.173736       1 server_others.go:152] "Using iptables Proxier"
	I0115 11:13:43.173895       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0115 11:13:43.173955       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0115 11:13:43.174097       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0115 11:13:43.174437       1 server.go:846] "Version info" version="v1.28.4"
	I0115 11:13:43.176416       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0115 11:13:43.177375       1 config.go:188] "Starting service config controller"
	I0115 11:13:43.177472       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0115 11:13:43.177530       1 config.go:97] "Starting endpoint slice config controller"
	I0115 11:13:43.177573       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0115 11:13:43.179993       1 config.go:315] "Starting node config controller"
	I0115 11:13:43.180078       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0115 11:13:43.278190       1 shared_informer.go:318] Caches are synced for service config
	I0115 11:13:43.278363       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0115 11:13:43.280417       1 shared_informer.go:318] Caches are synced for node config
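	# kube-proxy is running in iptables mode, so Service virtual IPs are
	# implemented via the KUBE-SERVICES nat chain. A minimal sketch to inspect
	# it on the node:
	#   out/minikube-linux-arm64 -p multinode-279658 ssh "sudo iptables -t nat -L KUBE-SERVICES"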
	
	
	==> kube-scheduler [2a760288fcafd01eb318d347edbc6db4cfe333db775cbbf49a561b2d20ccce4e] <==
	E0115 11:13:26.140682       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0115 11:13:26.140687       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0115 11:13:26.140720       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0115 11:13:26.140728       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0115 11:13:26.140729       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0115 11:13:26.140738       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0115 11:13:26.140799       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0115 11:13:26.140812       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0115 11:13:26.948473       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0115 11:13:26.948618       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0115 11:13:26.953706       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0115 11:13:26.953766       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0115 11:13:27.031309       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0115 11:13:27.031347       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0115 11:13:27.077352       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0115 11:13:27.077391       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0115 11:13:27.176100       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0115 11:13:27.176221       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0115 11:13:27.210941       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0115 11:13:27.210974       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0115 11:13:27.221812       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0115 11:13:27.221850       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0115 11:13:27.284020       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0115 11:13:27.284053       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0115 11:13:28.631128       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
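	# The scheduler's early "forbidden" list/watch errors are expected startup
	# noise while RBAC bootstrapping completes; the final "Caches are synced"
	# line shows recovery. A minimal sketch to verify the permission afterwards:
	#   kubectl --context multinode-279658 auth can-i list poddisruptionbudgets \
	#     --as=system:kube-scheduler --all-namespaces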
	
	
	==> kubelet <==
	Jan 15 11:14:13 multinode-279658 kubelet[1382]: I0115 11:14:13.639905    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76xk5\" (UniqueName: \"kubernetes.io/projected/20120dfd-708f-4d25-a64a-d790f55c3e56-kube-api-access-76xk5\") pod \"coredns-5dd5756b68-rmgns\" (UID: \"20120dfd-708f-4d25-a64a-d790f55c3e56\") " pod="kube-system/coredns-5dd5756b68-rmgns"
	Jan 15 11:14:13 multinode-279658 kubelet[1382]: I0115 11:14:13.639932    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/734e2efb-4fca-4aec-ba3d-882668c1ced5-tmp\") pod \"storage-provisioner\" (UID: \"734e2efb-4fca-4aec-ba3d-882668c1ced5\") " pod="kube-system/storage-provisioner"
	Jan 15 11:14:13 multinode-279658 kubelet[1382]: I0115 11:14:13.639958    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0bb83c0f-1bf1-4ade-94f6-8e46770f3371-config-volume\") pod \"coredns-5dd5756b68-jqj8x\" (UID: \"0bb83c0f-1bf1-4ade-94f6-8e46770f3371\") " pod="kube-system/coredns-5dd5756b68-jqj8x"
	Jan 15 11:14:13 multinode-279658 kubelet[1382]: I0115 11:14:13.639981    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nbpl\" (UniqueName: \"kubernetes.io/projected/0bb83c0f-1bf1-4ade-94f6-8e46770f3371-kube-api-access-5nbpl\") pod \"coredns-5dd5756b68-jqj8x\" (UID: \"0bb83c0f-1bf1-4ade-94f6-8e46770f3371\") " pod="kube-system/coredns-5dd5756b68-jqj8x"
	Jan 15 11:14:13 multinode-279658 kubelet[1382]: I0115 11:14:13.640003    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnrw6\" (UniqueName: \"kubernetes.io/projected/734e2efb-4fca-4aec-ba3d-882668c1ced5-kube-api-access-qnrw6\") pod \"storage-provisioner\" (UID: \"734e2efb-4fca-4aec-ba3d-882668c1ced5\") " pod="kube-system/storage-provisioner"
	Jan 15 11:14:13 multinode-279658 kubelet[1382]: W0115 11:14:13.797236    1382 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/a18a2b3c9b565e6af2c30d7338137b4960649a9ec9dbde78f7aef931d0441cd5/crio-52ca71e1c51d80a8c823bc8fbbfa089ba5230a3f155339364b5d473e096fcbe8 WatchSource:0}: Error finding container 52ca71e1c51d80a8c823bc8fbbfa089ba5230a3f155339364b5d473e096fcbe8: Status 404 returned error can't find the container with id 52ca71e1c51d80a8c823bc8fbbfa089ba5230a3f155339364b5d473e096fcbe8
	Jan 15 11:14:13 multinode-279658 kubelet[1382]: W0115 11:14:13.859172    1382 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/a18a2b3c9b565e6af2c30d7338137b4960649a9ec9dbde78f7aef931d0441cd5/crio-b80cd9a6e0138cace071ea64a2d370c29cd5cd887fbb8e4b99b656914b231eb9 WatchSource:0}: Error finding container b80cd9a6e0138cace071ea64a2d370c29cd5cd887fbb8e4b99b656914b231eb9: Status 404 returned error can't find the container with id b80cd9a6e0138cace071ea64a2d370c29cd5cd887fbb8e4b99b656914b231eb9
	Jan 15 11:14:14 multinode-279658 kubelet[1382]: I0115 11:14:14.206666    1382 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=31.206619801 podCreationTimestamp="2024-01-15 11:13:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-15 11:14:14.194432786 +0000 UTC m=+45.305666241" watchObservedRunningTime="2024-01-15 11:14:14.206619801 +0000 UTC m=+45.317853223"
	Jan 15 11:14:14 multinode-279658 kubelet[1382]: I0115 11:14:14.224352    1382 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-jqj8x" podStartSLOduration=33.224308635 podCreationTimestamp="2024-01-15 11:13:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-15 11:14:14.207087736 +0000 UTC m=+45.318321158" watchObservedRunningTime="2024-01-15 11:14:14.224308635 +0000 UTC m=+45.335542057"
	Jan 15 11:14:31 multinode-279658 kubelet[1382]: I0115 11:14:31.554397    1382 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-rmgns" podStartSLOduration=49.554352787 podCreationTimestamp="2024-01-15 11:13:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-15 11:14:14.224596636 +0000 UTC m=+45.335830066" watchObservedRunningTime="2024-01-15 11:14:31.554352787 +0000 UTC m=+62.665586225"
	Jan 15 11:14:36 multinode-279658 kubelet[1382]: I0115 11:14:36.794111    1382 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0bb83c0f-1bf1-4ade-94f6-8e46770f3371-config-volume\") pod \"0bb83c0f-1bf1-4ade-94f6-8e46770f3371\" (UID: \"0bb83c0f-1bf1-4ade-94f6-8e46770f3371\") "
	Jan 15 11:14:36 multinode-279658 kubelet[1382]: I0115 11:14:36.794178    1382 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nbpl\" (UniqueName: \"kubernetes.io/projected/0bb83c0f-1bf1-4ade-94f6-8e46770f3371-kube-api-access-5nbpl\") pod \"0bb83c0f-1bf1-4ade-94f6-8e46770f3371\" (UID: \"0bb83c0f-1bf1-4ade-94f6-8e46770f3371\") "
	Jan 15 11:14:36 multinode-279658 kubelet[1382]: I0115 11:14:36.794621    1382 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bb83c0f-1bf1-4ade-94f6-8e46770f3371-config-volume" (OuterVolumeSpecName: "config-volume") pod "0bb83c0f-1bf1-4ade-94f6-8e46770f3371" (UID: "0bb83c0f-1bf1-4ade-94f6-8e46770f3371"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Jan 15 11:14:36 multinode-279658 kubelet[1382]: I0115 11:14:36.798580    1382 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bb83c0f-1bf1-4ade-94f6-8e46770f3371-kube-api-access-5nbpl" (OuterVolumeSpecName: "kube-api-access-5nbpl") pod "0bb83c0f-1bf1-4ade-94f6-8e46770f3371" (UID: "0bb83c0f-1bf1-4ade-94f6-8e46770f3371"). InnerVolumeSpecName "kube-api-access-5nbpl". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 15 11:14:36 multinode-279658 kubelet[1382]: I0115 11:14:36.894864    1382 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-5nbpl\" (UniqueName: \"kubernetes.io/projected/0bb83c0f-1bf1-4ade-94f6-8e46770f3371-kube-api-access-5nbpl\") on node \"multinode-279658\" DevicePath \"\""
	Jan 15 11:14:36 multinode-279658 kubelet[1382]: I0115 11:14:36.894914    1382 reconciler_common.go:300] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0bb83c0f-1bf1-4ade-94f6-8e46770f3371-config-volume\") on node \"multinode-279658\" DevicePath \"\""
	Jan 15 11:14:37 multinode-279658 kubelet[1382]: I0115 11:14:37.230515    1382 scope.go:117] "RemoveContainer" containerID="9a6a3ca65826b0476e9311d8ef09139ba2f8730b92ad2785298e744e98d9ee12"
	Jan 15 11:14:37 multinode-279658 kubelet[1382]: I0115 11:14:37.257668    1382 scope.go:117] "RemoveContainer" containerID="9a6a3ca65826b0476e9311d8ef09139ba2f8730b92ad2785298e744e98d9ee12"
	Jan 15 11:14:37 multinode-279658 kubelet[1382]: E0115 11:14:37.258047    1382 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a6a3ca65826b0476e9311d8ef09139ba2f8730b92ad2785298e744e98d9ee12\": container with ID starting with 9a6a3ca65826b0476e9311d8ef09139ba2f8730b92ad2785298e744e98d9ee12 not found: ID does not exist" containerID="9a6a3ca65826b0476e9311d8ef09139ba2f8730b92ad2785298e744e98d9ee12"
	Jan 15 11:14:37 multinode-279658 kubelet[1382]: I0115 11:14:37.258175    1382 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a6a3ca65826b0476e9311d8ef09139ba2f8730b92ad2785298e744e98d9ee12"} err="failed to get container status \"9a6a3ca65826b0476e9311d8ef09139ba2f8730b92ad2785298e744e98d9ee12\": rpc error: code = NotFound desc = could not find container \"9a6a3ca65826b0476e9311d8ef09139ba2f8730b92ad2785298e744e98d9ee12\": container with ID starting with 9a6a3ca65826b0476e9311d8ef09139ba2f8730b92ad2785298e744e98d9ee12 not found: ID does not exist"
	Jan 15 11:14:39 multinode-279658 kubelet[1382]: I0115 11:14:39.050398    1382 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="0bb83c0f-1bf1-4ade-94f6-8e46770f3371" path="/var/lib/kubelet/pods/0bb83c0f-1bf1-4ade-94f6-8e46770f3371/volumes"
	Jan 15 11:15:04 multinode-279658 kubelet[1382]: I0115 11:15:04.455127    1382 topology_manager.go:215] "Topology Admit Handler" podUID="7e3feeaa-06ce-4e40-8c0a-3ba5d84a402b" podNamespace="default" podName="busybox-5bc68d56bd-nn8t2"
	Jan 15 11:15:04 multinode-279658 kubelet[1382]: E0115 11:15:04.455200    1382 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0bb83c0f-1bf1-4ade-94f6-8e46770f3371" containerName="coredns"
	Jan 15 11:15:04 multinode-279658 kubelet[1382]: I0115 11:15:04.455231    1382 memory_manager.go:346] "RemoveStaleState removing state" podUID="0bb83c0f-1bf1-4ade-94f6-8e46770f3371" containerName="coredns"
	Jan 15 11:15:04 multinode-279658 kubelet[1382]: I0115 11:15:04.559884    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt7xv\" (UniqueName: \"kubernetes.io/projected/7e3feeaa-06ce-4e40-8c0a-3ba5d84a402b-kube-api-access-dt7xv\") pod \"busybox-5bc68d56bd-nn8t2\" (UID: \"7e3feeaa-06ce-4e40-8c0a-3ba5d84a402b\") " pod="default/busybox-5bc68d56bd-nn8t2"
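	# The kubelet excerpt ends just as the busybox test pod is admitted. A
	# minimal sketch to tail the full kubelet journal on the node:
	#   out/minikube-linux-arm64 -p multinode-279658 ssh "sudo journalctl -u kubelet --no-pager -n 50"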
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p multinode-279658 -n multinode-279658
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-279658 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (4.27s)
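The failed assertion in TestMultiNode/serial/PingHostFrom2Pods exercises pod-to-host connectivity from the two busybox pods shown in the logs above. A minimal manual reproduction sketch; the exact address the test pings is not shown in this excerpt, so 192.168.58.1 (the docker network gateway implied by the node IPs) is an assumption:

	kubectl --context multinode-279658 exec busybox-5bc68d56bd-nn8t2 -- ping -c 1 192.168.58.1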

                                                
                                    

Test pass (285/320)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 12.32
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.09
9 TestDownloadOnly/v1.16.0/DeleteAll 0.24
10 TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds 0.16
12 TestDownloadOnly/v1.28.4/json-events 10.78
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.09
18 TestDownloadOnly/v1.28.4/DeleteAll 0.24
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.17
21 TestDownloadOnly/v1.29.0-rc.2/json-events 13.98
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.09
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.25
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.16
30 TestBinaryMirror 0.63
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.1
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.09
36 TestAddons/Setup 161.94
38 TestAddons/parallel/Registry 16.66
40 TestAddons/parallel/InspektorGadget 12.01
41 TestAddons/parallel/MetricsServer 6.5
44 TestAddons/parallel/CSI 70.03
45 TestAddons/parallel/Headlamp 11.54
46 TestAddons/parallel/CloudSpanner 6.6
47 TestAddons/parallel/LocalPath 8.53
48 TestAddons/parallel/NvidiaDevicePlugin 5.54
49 TestAddons/parallel/Yakd 6
52 TestAddons/serial/GCPAuth/Namespaces 0.18
53 TestAddons/StoppedEnableDisable 12.32
54 TestCertOptions 35.49
55 TestCertExpiration 233.9
57 TestForceSystemdFlag 31.66
58 TestForceSystemdEnv 39.3
64 TestErrorSpam/setup 29.32
65 TestErrorSpam/start 0.9
66 TestErrorSpam/status 1.15
67 TestErrorSpam/pause 1.86
68 TestErrorSpam/unpause 1.99
69 TestErrorSpam/stop 1.46
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 75.52
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 34.58
76 TestFunctional/serial/KubeContext 0.06
77 TestFunctional/serial/KubectlGetPods 0.1
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.85
81 TestFunctional/serial/CacheCmd/cache/add_local 1.12
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
83 TestFunctional/serial/CacheCmd/cache/list 0.08
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.36
85 TestFunctional/serial/CacheCmd/cache/cache_reload 2.16
86 TestFunctional/serial/CacheCmd/cache/delete 0.14
87 TestFunctional/serial/MinikubeKubectlCmd 0.16
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.16
89 TestFunctional/serial/ExtraConfig 36.7
90 TestFunctional/serial/ComponentHealth 0.1
91 TestFunctional/serial/LogsCmd 1.82
92 TestFunctional/serial/LogsFileCmd 1.83
93 TestFunctional/serial/InvalidService 4.33
95 TestFunctional/parallel/ConfigCmd 0.62
96 TestFunctional/parallel/DashboardCmd 10.41
97 TestFunctional/parallel/DryRun 0.53
98 TestFunctional/parallel/InternationalLanguage 0.28
99 TestFunctional/parallel/StatusCmd 1.27
103 TestFunctional/parallel/ServiceCmdConnect 12.82
104 TestFunctional/parallel/AddonsCmd 0.33
105 TestFunctional/parallel/PersistentVolumeClaim 25.82
107 TestFunctional/parallel/SSHCmd 0.75
108 TestFunctional/parallel/CpCmd 2.64
110 TestFunctional/parallel/FileSync 0.55
111 TestFunctional/parallel/CertSync 2.75
115 TestFunctional/parallel/NodeLabels 0.11
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.74
119 TestFunctional/parallel/License 0.31
121 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.7
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
124 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.53
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.13
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
130 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
131 TestFunctional/parallel/ServiceCmd/DeployApp 6.23
132 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
133 TestFunctional/parallel/ProfileCmd/profile_list 0.44
134 TestFunctional/parallel/ProfileCmd/profile_json_output 0.44
135 TestFunctional/parallel/MountCmd/any-port 8.55
136 TestFunctional/parallel/ServiceCmd/List 0.58
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.67
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.53
139 TestFunctional/parallel/ServiceCmd/Format 0.55
140 TestFunctional/parallel/ServiceCmd/URL 0.44
141 TestFunctional/parallel/MountCmd/specific-port 2.16
142 TestFunctional/parallel/MountCmd/VerifyCleanup 3.26
143 TestFunctional/parallel/Version/short 0.18
144 TestFunctional/parallel/Version/components 1.56
145 TestFunctional/parallel/ImageCommands/ImageListShort 0.31
146 TestFunctional/parallel/ImageCommands/ImageListTable 0.32
147 TestFunctional/parallel/ImageCommands/ImageListJson 0.35
148 TestFunctional/parallel/ImageCommands/ImageListYaml 0.34
149 TestFunctional/parallel/ImageCommands/ImageBuild 2.88
150 TestFunctional/parallel/ImageCommands/Setup 1.8
151 TestFunctional/parallel/UpdateContextCmd/no_changes 0.28
152 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.24
153 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.26
154 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.45
155 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.99
156 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.21
157 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.96
158 TestFunctional/parallel/ImageCommands/ImageRemove 0.58
159 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.27
160 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.99
161 TestFunctional/delete_addon-resizer_images 0.09
162 TestFunctional/delete_my-image_image 0.02
163 TestFunctional/delete_minikube_cached_images 0.02
167 TestIngressAddonLegacy/StartLegacyK8sCluster 85.43
169 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 11.93
170 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.67
174 TestJSONOutput/start/Command 51.79
175 TestJSONOutput/start/Audit 0
177 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/pause/Command 0.81
181 TestJSONOutput/pause/Audit 0
183 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/unpause/Command 0.75
187 TestJSONOutput/unpause/Audit 0
189 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/stop/Command 5.89
193 TestJSONOutput/stop/Audit 0
195 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
197 TestErrorJSONOutput 0.28
199 TestKicCustomNetwork/create_custom_network 42.93
200 TestKicCustomNetwork/use_default_bridge_network 33.22
201 TestKicExistingNetwork 35.49
202 TestKicCustomSubnet 37.9
203 TestKicStaticIP 38.16
204 TestMainNoArgs 0.06
205 TestMinikubeProfile 67.27
208 TestMountStart/serial/StartWithMountFirst 6.5
209 TestMountStart/serial/VerifyMountFirst 0.3
210 TestMountStart/serial/StartWithMountSecond 6.74
211 TestMountStart/serial/VerifyMountSecond 0.31
212 TestMountStart/serial/DeleteFirst 1.67
213 TestMountStart/serial/VerifyMountPostDelete 0.3
214 TestMountStart/serial/Stop 1.23
215 TestMountStart/serial/RestartStopped 7.85
216 TestMountStart/serial/VerifyMountPostStop 0.31
219 TestMultiNode/serial/FreshStart2Nodes 126.83
220 TestMultiNode/serial/DeployApp2Nodes 5.12
222 TestMultiNode/serial/AddNode 48.88
223 TestMultiNode/serial/MultiNodeLabels 0.1
224 TestMultiNode/serial/ProfileList 0.35
225 TestMultiNode/serial/CopyFile 11.25
226 TestMultiNode/serial/StopNode 2.39
227 TestMultiNode/serial/StartAfterStop 13.26
228 TestMultiNode/serial/RestartKeepsNodes 121.25
229 TestMultiNode/serial/DeleteNode 5.25
230 TestMultiNode/serial/StopMultiNode 24.11
231 TestMultiNode/serial/RestartMultiNode 80.01
232 TestMultiNode/serial/ValidateNameConflict 35.75
237 TestPreload 175.07
239 TestScheduledStopUnix 108.53
242 TestInsufficientStorage 12.02
243 TestRunningBinaryUpgrade 113.82
245 TestKubernetesUpgrade 405.54
246 TestMissingContainerUpgrade 149.44
248 TestPause/serial/Start 82.84
249 TestPause/serial/SecondStartNoReconfiguration 31.67
250 TestPause/serial/Pause 0.99
251 TestPause/serial/VerifyStatus 0.44
252 TestPause/serial/Unpause 0.97
253 TestPause/serial/PauseAgain 1.58
254 TestPause/serial/DeletePaused 6.95
255 TestPause/serial/VerifyDeletedResources 0.5
256 TestStoppedBinaryUpgrade/Setup 1.14
257 TestStoppedBinaryUpgrade/Upgrade 80.84
258 TestStoppedBinaryUpgrade/MinikubeLogs 1.1
267 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
268 TestNoKubernetes/serial/StartWithK8s 35.47
269 TestNoKubernetes/serial/StartWithStopK8s 12.58
270 TestNoKubernetes/serial/Start 6.15
271 TestNoKubernetes/serial/VerifyK8sNotRunning 0.4
272 TestNoKubernetes/serial/ProfileList 1.11
273 TestNoKubernetes/serial/Stop 1.23
274 TestNoKubernetes/serial/StartNoArgs 6.9
275 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.3
283 TestNetworkPlugins/group/false 5.36
288 TestStartStop/group/old-k8s-version/serial/FirstStart 127.25
290 TestStartStop/group/no-preload/serial/FirstStart 67.88
291 TestStartStop/group/old-k8s-version/serial/DeployApp 9.66
292 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.03
293 TestStartStop/group/old-k8s-version/serial/Stop 12.18
294 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.27
295 TestStartStop/group/old-k8s-version/serial/SecondStart 447.33
296 TestStartStop/group/no-preload/serial/DeployApp 9.46
297 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.78
298 TestStartStop/group/no-preload/serial/Stop 12.35
299 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.24
300 TestStartStop/group/no-preload/serial/SecondStart 366.07
301 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
302 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
303 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.3
304 TestStartStop/group/no-preload/serial/Pause 3.42
306 TestStartStop/group/embed-certs/serial/FirstStart 84.76
307 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
308 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.17
309 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.37
310 TestStartStop/group/old-k8s-version/serial/Pause 5.1
312 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 80.27
313 TestStartStop/group/embed-certs/serial/DeployApp 9.41
314 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.22
315 TestStartStop/group/embed-certs/serial/Stop 12.06
316 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.24
317 TestStartStop/group/embed-certs/serial/SecondStart 628.72
318 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.42
319 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.54
320 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.33
321 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
322 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 353.46
323 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 13
324 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
325 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.3
326 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.48
328 TestStartStop/group/newest-cni/serial/FirstStart 43.57
329 TestStartStop/group/newest-cni/serial/DeployApp 0
330 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.24
331 TestStartStop/group/newest-cni/serial/Stop 1.29
332 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
333 TestStartStop/group/newest-cni/serial/SecondStart 31.32
334 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
335 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
336 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
337 TestStartStop/group/newest-cni/serial/Pause 3.35
338 TestNetworkPlugins/group/auto/Start 78.83
339 TestNetworkPlugins/group/auto/KubeletFlags 0.38
340 TestNetworkPlugins/group/auto/NetCatPod 10.32
341 TestNetworkPlugins/group/auto/DNS 0.21
342 TestNetworkPlugins/group/auto/Localhost 0.18
343 TestNetworkPlugins/group/auto/HairPin 0.17
344 TestNetworkPlugins/group/kindnet/Start 77.13
345 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
346 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.16
347 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.32
348 TestStartStop/group/embed-certs/serial/Pause 3.89
349 TestNetworkPlugins/group/calico/Start 78.33
350 TestNetworkPlugins/group/kindnet/ControllerPod 6.03
351 TestNetworkPlugins/group/kindnet/KubeletFlags 0.44
352 TestNetworkPlugins/group/kindnet/NetCatPod 12.39
353 TestNetworkPlugins/group/kindnet/DNS 0.36
354 TestNetworkPlugins/group/kindnet/Localhost 0.28
355 TestNetworkPlugins/group/kindnet/HairPin 0.25
356 TestNetworkPlugins/group/calico/ControllerPod 6.01
357 TestNetworkPlugins/group/custom-flannel/Start 57.84
358 TestNetworkPlugins/group/calico/KubeletFlags 0.51
359 TestNetworkPlugins/group/calico/NetCatPod 13.34
360 TestNetworkPlugins/group/calico/DNS 0.25
361 TestNetworkPlugins/group/calico/Localhost 0.23
362 TestNetworkPlugins/group/calico/HairPin 0.19
363 TestNetworkPlugins/group/enable-default-cni/Start 90.69
364 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.4
365 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.32
366 TestNetworkPlugins/group/custom-flannel/DNS 0.33
367 TestNetworkPlugins/group/custom-flannel/Localhost 0.26
368 TestNetworkPlugins/group/custom-flannel/HairPin 0.28
369 TestNetworkPlugins/group/flannel/Start 67.75
370 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.32
371 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.31
372 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
373 TestNetworkPlugins/group/enable-default-cni/Localhost 0.24
374 TestNetworkPlugins/group/enable-default-cni/HairPin 0.21
375 TestNetworkPlugins/group/flannel/ControllerPod 6.01
376 TestNetworkPlugins/group/bridge/Start 94.23
377 TestNetworkPlugins/group/flannel/KubeletFlags 0.37
378 TestNetworkPlugins/group/flannel/NetCatPod 10.38
379 TestNetworkPlugins/group/flannel/DNS 0.27
380 TestNetworkPlugins/group/flannel/Localhost 0.24
381 TestNetworkPlugins/group/flannel/HairPin 0.21
382 TestNetworkPlugins/group/bridge/KubeletFlags 0.39
383 TestNetworkPlugins/group/bridge/NetCatPod 11.45
384 TestNetworkPlugins/group/bridge/DNS 0.18
385 TestNetworkPlugins/group/bridge/Localhost 0.16
386 TestNetworkPlugins/group/bridge/HairPin 0.17
TestDownloadOnly/v1.16.0/json-events (12.32s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-172231 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-172231 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (12.321911759s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (12.32s)
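
The download-only run above is easy to reproduce outside the CI harness. A minimal sketch, assuming a minikube binary on PATH and an arbitrary profile name; the flags are taken verbatim from the Run line above:

	minikube start -o=json --download-only -p demo --force --alsologtostderr \
	  --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker
	minikube delete -p demo

With -o=json, minikube prints one JSON progress event per line on stdout, which is presumably what the json-events assertions consume.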

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-172231
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-172231: exit status 85 (90.818627ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-172231 | jenkins | v1.32.0 | 15 Jan 24 10:50 UTC |          |
	|         | -p download-only-172231        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/15 10:50:28
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0115 10:50:28.883280 1630440 out.go:296] Setting OutFile to fd 1 ...
	I0115 10:50:28.883489 1630440 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 10:50:28.883518 1630440 out.go:309] Setting ErrFile to fd 2...
	I0115 10:50:28.883540 1630440 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 10:50:28.883818 1630440 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-1625104/.minikube/bin
	W0115 10:50:28.883996 1630440 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17953-1625104/.minikube/config/config.json: open /home/jenkins/minikube-integration/17953-1625104/.minikube/config/config.json: no such file or directory
	I0115 10:50:28.884438 1630440 out.go:303] Setting JSON to true
	I0115 10:50:28.885329 1630440 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":34371,"bootTime":1705281458,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0115 10:50:28.885429 1630440 start.go:138] virtualization:  
	I0115 10:50:28.888382 1630440 out.go:97] [download-only-172231] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0115 10:50:28.890693 1630440 out.go:169] MINIKUBE_LOCATION=17953
	W0115 10:50:28.888630 1630440 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17953-1625104/.minikube/cache/preloaded-tarball: no such file or directory
	I0115 10:50:28.888711 1630440 notify.go:220] Checking for updates...
	I0115 10:50:28.894349 1630440 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 10:50:28.896342 1630440 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17953-1625104/kubeconfig
	I0115 10:50:28.898551 1630440 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-1625104/.minikube
	I0115 10:50:28.900482 1630440 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0115 10:50:28.904216 1630440 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0115 10:50:28.904472 1630440 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 10:50:28.929273 1630440 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0115 10:50:28.929389 1630440 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 10:50:29.014299 1630440 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2024-01-15 10:50:29.003809998 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0115 10:50:29.014403 1630440 docker.go:295] overlay module found
	I0115 10:50:29.016455 1630440 out.go:97] Using the docker driver based on user configuration
	I0115 10:50:29.016491 1630440 start.go:298] selected driver: docker
	I0115 10:50:29.016498 1630440 start.go:902] validating driver "docker" against <nil>
	I0115 10:50:29.016623 1630440 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 10:50:29.098459 1630440 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2024-01-15 10:50:29.088743597 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0115 10:50:29.098623 1630440 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0115 10:50:29.098916 1630440 start_flags.go:392] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0115 10:50:29.099068 1630440 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0115 10:50:29.101307 1630440 out.go:169] Using Docker driver with root privileges
	I0115 10:50:29.103374 1630440 cni.go:84] Creating CNI manager for ""
	I0115 10:50:29.103418 1630440 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0115 10:50:29.103434 1630440 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0115 10:50:29.103449 1630440 start_flags.go:321] config:
	{Name:download-only-172231 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-172231 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 10:50:29.105482 1630440 out.go:97] Starting control plane node download-only-172231 in cluster download-only-172231
	I0115 10:50:29.105515 1630440 cache.go:121] Beginning downloading kic base image for docker with crio
	I0115 10:50:29.107269 1630440 out.go:97] Pulling base image v0.0.42-1704759386-17866 ...
	I0115 10:50:29.107305 1630440 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0115 10:50:29.107518 1630440 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0115 10:50:29.125911 1630440 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0115 10:50:29.126861 1630440 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0115 10:50:29.126964 1630440 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0115 10:50:29.167276 1630440 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I0115 10:50:29.167302 1630440 cache.go:56] Caching tarball of preloaded images
	I0115 10:50:29.167426 1630440 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0115 10:50:29.170062 1630440 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0115 10:50:29.170088 1630440 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I0115 10:50:29.280893 1630440 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:743cd3b7071469270e4dbdc0d89badaa -> /home/jenkins/minikube-integration/17953-1625104/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I0115 10:50:33.952896 1630440 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	I0115 10:50:37.397287 1630440 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I0115 10:50:37.397406 1630440 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17953-1625104/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I0115 10:50:38.402172 1630440 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0115 10:50:38.402561 1630440 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/download-only-172231/config.json ...
	I0115 10:50:38.402595 1630440 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/download-only-172231/config.json: {Name:mk052fd47b2681358b4f0bab9a7f490b063a7e28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:50:38.402785 1630440 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0115 10:50:38.402959 1630440 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/arm64/kubectl.sha1 -> /home/jenkins/minikube-integration/17953-1625104/.minikube/cache/linux/arm64/v1.16.0/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-172231"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.09s)
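
Note that this subtest passes even though "minikube logs" exits non-zero: with --download-only no node was ever created, so exit status 85 and the trailing 'The control plane node "" does not exist.' hint are the tolerated outcome here. A quick way to confirm the same behavior by hand, using the profile from this run:

	out/minikube-linux-arm64 logs -p download-only-172231
	echo $?   # 85 in this run, per the Non-zero exit line above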

                                                
                                    
TestDownloadOnly/v1.16.0/DeleteAll (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.16.0/DeleteAll (0.24s)

                                                
                                    
TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-172231
--- PASS: TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.16s)

                                                
                                    
TestDownloadOnly/v1.28.4/json-events (10.78s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-982144 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-982144 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio: (10.780927681s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (10.78s)

                                                
                                    
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-982144
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-982144: exit status 85 (89.215139ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-172231 | jenkins | v1.32.0 | 15 Jan 24 10:50 UTC |                     |
	|         | -p download-only-172231        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 15 Jan 24 10:50 UTC | 15 Jan 24 10:50 UTC |
	| delete  | -p download-only-172231        | download-only-172231 | jenkins | v1.32.0 | 15 Jan 24 10:50 UTC | 15 Jan 24 10:50 UTC |
	| start   | -o=json --download-only        | download-only-982144 | jenkins | v1.32.0 | 15 Jan 24 10:50 UTC |                     |
	|         | -p download-only-982144        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/15 10:50:41
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0115 10:50:41.696054 1630599 out.go:296] Setting OutFile to fd 1 ...
	I0115 10:50:41.696187 1630599 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 10:50:41.696196 1630599 out.go:309] Setting ErrFile to fd 2...
	I0115 10:50:41.696202 1630599 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 10:50:41.696444 1630599 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-1625104/.minikube/bin
	I0115 10:50:41.696867 1630599 out.go:303] Setting JSON to true
	I0115 10:50:41.697819 1630599 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":34384,"bootTime":1705281458,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0115 10:50:41.697893 1630599 start.go:138] virtualization:  
	I0115 10:50:41.700033 1630599 out.go:97] [download-only-982144] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0115 10:50:41.701853 1630599 out.go:169] MINIKUBE_LOCATION=17953
	I0115 10:50:41.700354 1630599 notify.go:220] Checking for updates...
	I0115 10:50:41.705180 1630599 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 10:50:41.706771 1630599 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17953-1625104/kubeconfig
	I0115 10:50:41.708593 1630599 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-1625104/.minikube
	I0115 10:50:41.710428 1630599 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0115 10:50:41.714161 1630599 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0115 10:50:41.714455 1630599 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 10:50:41.741078 1630599 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0115 10:50:41.741200 1630599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 10:50:41.827742 1630599 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2024-01-15 10:50:41.818128051 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0115 10:50:41.827845 1630599 docker.go:295] overlay module found
	I0115 10:50:41.829774 1630599 out.go:97] Using the docker driver based on user configuration
	I0115 10:50:41.829802 1630599 start.go:298] selected driver: docker
	I0115 10:50:41.829809 1630599 start.go:902] validating driver "docker" against <nil>
	I0115 10:50:41.829909 1630599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 10:50:41.896849 1630599 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2024-01-15 10:50:41.88737189 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0115 10:50:41.897017 1630599 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0115 10:50:41.897305 1630599 start_flags.go:392] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0115 10:50:41.897464 1630599 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0115 10:50:41.899680 1630599 out.go:169] Using Docker driver with root privileges
	I0115 10:50:41.901492 1630599 cni.go:84] Creating CNI manager for ""
	I0115 10:50:41.901515 1630599 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0115 10:50:41.901527 1630599 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0115 10:50:41.901550 1630599 start_flags.go:321] config:
	{Name:download-only-982144 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-982144 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 10:50:41.903267 1630599 out.go:97] Starting control plane node download-only-982144 in cluster download-only-982144
	I0115 10:50:41.903298 1630599 cache.go:121] Beginning downloading kic base image for docker with crio
	I0115 10:50:41.905191 1630599 out.go:97] Pulling base image v0.0.42-1704759386-17866 ...
	I0115 10:50:41.905215 1630599 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0115 10:50:41.905380 1630599 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0115 10:50:41.923169 1630599 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0115 10:50:41.923328 1630599 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0115 10:50:41.923352 1630599 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory, skipping pull
	I0115 10:50:41.923362 1630599 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in cache, skipping pull
	I0115 10:50:41.923370 1630599 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	I0115 10:50:41.971851 1630599 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I0115 10:50:41.971885 1630599 cache.go:56] Caching tarball of preloaded images
	I0115 10:50:41.972693 1630599 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0115 10:50:41.975048 1630599 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0115 10:50:41.975081 1630599 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 ...
	I0115 10:50:42.086257 1630599 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4?checksum=md5:23e2271fd1a7b32f52ce36ae8363c081 -> /home/jenkins/minikube-integration/17953-1625104/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I0115 10:50:50.747568 1630599 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 ...
	I0115 10:50:50.747683 1630599 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17953-1625104/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 ...
	I0115 10:50:51.670837 1630599 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0115 10:50:51.671210 1630599 profile.go:148] Saving config to /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/download-only-982144/config.json ...
	I0115 10:50:51.671245 1630599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/download-only-982144/config.json: {Name:mk2bdacb6d901c4182ad1b8f1a5c2eaae9ea4583 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 10:50:51.671980 1630599 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0115 10:50:51.672190 1630599 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/17953-1625104/.minikube/cache/linux/arm64/v1.28.4/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-982144"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.09s)
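
The preload download in the log above carries its expected digest in the URL's checksum=md5:... query parameter. A minimal sketch for re-verifying the cached tarball by hand, assuming the default MINIKUBE_HOME of ~/.minikube (this CI run points MINIKUBE_HOME at a job-specific directory instead):

	md5sum ~/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	# digest from the download URL above: 23e2271fd1a7b32f52ce36ae8363c081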

                                                
                                    
TestDownloadOnly/v1.28.4/DeleteAll (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.24s)

                                                
                                    
TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-982144
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.17s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/json-events (13.98s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-492820 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-492820 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (13.975362309s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (13.98s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)
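
The preload-exists subtest presumably just confirms that the tarball fetched during json-events is still on disk, hence the 0.00s duration. An equivalent shell check, again assuming the default MINIKUBE_HOME:

	test -f ~/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4 && echo preload present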

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-492820
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-492820: exit status 85 (85.301802ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-172231 | jenkins | v1.32.0 | 15 Jan 24 10:50 UTC |                     |
	|         | -p download-only-172231           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 15 Jan 24 10:50 UTC | 15 Jan 24 10:50 UTC |
	| delete  | -p download-only-172231           | download-only-172231 | jenkins | v1.32.0 | 15 Jan 24 10:50 UTC | 15 Jan 24 10:50 UTC |
	| start   | -o=json --download-only           | download-only-982144 | jenkins | v1.32.0 | 15 Jan 24 10:50 UTC |                     |
	|         | -p download-only-982144           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 15 Jan 24 10:50 UTC | 15 Jan 24 10:50 UTC |
	| delete  | -p download-only-982144           | download-only-982144 | jenkins | v1.32.0 | 15 Jan 24 10:50 UTC | 15 Jan 24 10:50 UTC |
	| start   | -o=json --download-only           | download-only-492820 | jenkins | v1.32.0 | 15 Jan 24 10:50 UTC |                     |
	|         | -p download-only-492820           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/15 10:50:52
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0115 10:50:52.980647 1630762 out.go:296] Setting OutFile to fd 1 ...
	I0115 10:50:52.980869 1630762 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 10:50:52.980894 1630762 out.go:309] Setting ErrFile to fd 2...
	I0115 10:50:52.980917 1630762 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 10:50:52.981205 1630762 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-1625104/.minikube/bin
	I0115 10:50:52.981684 1630762 out.go:303] Setting JSON to true
	I0115 10:50:52.982628 1630762 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":34395,"bootTime":1705281458,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0115 10:50:52.982730 1630762 start.go:138] virtualization:  
	I0115 10:50:52.985563 1630762 out.go:97] [download-only-492820] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0115 10:50:52.987689 1630762 out.go:169] MINIKUBE_LOCATION=17953
	I0115 10:50:52.985897 1630762 notify.go:220] Checking for updates...
	I0115 10:50:52.991067 1630762 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 10:50:52.992872 1630762 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17953-1625104/kubeconfig
	I0115 10:50:52.994687 1630762 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-1625104/.minikube
	I0115 10:50:52.996823 1630762 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0115 10:50:53.000218 1630762 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0115 10:50:53.000546 1630762 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 10:50:53.027171 1630762 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0115 10:50:53.027303 1630762 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 10:50:53.109347 1630762 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2024-01-15 10:50:53.099778147 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0115 10:50:53.109466 1630762 docker.go:295] overlay module found
	I0115 10:50:53.111566 1630762 out.go:97] Using the docker driver based on user configuration
	I0115 10:50:53.111601 1630762 start.go:298] selected driver: docker
	I0115 10:50:53.111610 1630762 start.go:902] validating driver "docker" against <nil>
	I0115 10:50:53.111722 1630762 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 10:50:53.179191 1630762 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2024-01-15 10:50:53.169788005 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0115 10:50:53.179361 1630762 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0115 10:50:53.179640 1630762 start_flags.go:392] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0115 10:50:53.179793 1630762 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0115 10:50:53.181972 1630762 out.go:169] Using Docker driver with root privileges
	I0115 10:50:53.183779 1630762 cni.go:84] Creating CNI manager for ""
	I0115 10:50:53.183803 1630762 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0115 10:50:53.183813 1630762 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0115 10:50:53.183838 1630762 start_flags.go:321] config:
	{Name:download-only-492820 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-492820 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 10:50:53.185954 1630762 out.go:97] Starting control plane node download-only-492820 in cluster download-only-492820
	I0115 10:50:53.185977 1630762 cache.go:121] Beginning downloading kic base image for docker with crio
	I0115 10:50:53.187704 1630762 out.go:97] Pulling base image v0.0.42-1704759386-17866 ...
	I0115 10:50:53.187731 1630762 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0115 10:50:53.187901 1630762 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0115 10:50:53.204782 1630762 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0115 10:50:53.204915 1630762 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0115 10:50:53.204935 1630762 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory, skipping pull
	I0115 10:50:53.204941 1630762 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in cache, skipping pull
	I0115 10:50:53.204949 1630762 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	I0115 10:50:53.249506 1630762 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4
	I0115 10:50:53.249531 1630762 cache.go:56] Caching tarball of preloaded images
	I0115 10:50:53.250290 1630762 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0115 10:50:53.252502 1630762 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0115 10:50:53.252527 1630762 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4 ...
	I0115 10:50:53.363819 1630762 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4?checksum=md5:9d8119c6fd5c58f71de57a6fdbe27bf3 -> /home/jenkins/minikube-integration/17953-1625104/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-492820"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.09s)
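
The log above shows minikube resolving both the kicbase image and the preloaded-images tarball from its local cache. A minimal sketch of seeding that cache ahead of time, assuming a writable ~/.minikube and an illustrative "demo" profile name:

	# Pre-download the kicbase image and the v1.29.0-rc.2 preload without starting a cluster.
	out/minikube-linux-arm64 start -p demo --download-only \
	  --kubernetes-version=v1.29.0-rc.2 --driver=docker --container-runtime=crio
	# The preload tarball should then appear under the cache directory:
	ls ~/.minikube/cache/preloaded-tarball/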

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.25s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.25s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.16s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-492820
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.16s)

TestBinaryMirror (0.63s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-204729 --alsologtostderr --binary-mirror http://127.0.0.1:38859 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-204729" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-204729
--- PASS: TestBinaryMirror (0.63s)
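
The test above points --binary-mirror at a throwaway HTTP server it runs itself. A rough sketch of the same idea with a hand-rolled mirror, assuming /srv/minikube-mirror holds the binary layout minikube expects (the directory name, port, and "mirror-demo" profile are illustrative):

	python3 -m http.server 38859 --directory /srv/minikube-mirror &
	out/minikube-linux-arm64 start --download-only -p mirror-demo \
	  --binary-mirror http://127.0.0.1:38859 --driver=docker --container-runtime=crio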

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.1s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-944407
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-944407: exit status 85 (100.272663ms)

-- stdout --
	* Profile "addons-944407" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-944407"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.10s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-944407
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-944407: exit status 85 (89.937747ms)

-- stdout --
	* Profile "addons-944407" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-944407"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

TestAddons/Setup (161.94s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-944407 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-944407 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (2m41.942973204s)
--- PASS: TestAddons/Setup (161.94s)

TestAddons/parallel/Registry (16.66s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 44.920099ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-bnfg8" [b4acd00d-da91-4eb6-bd16-c83cf4d53f2c] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004788916s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-nzd7c" [763da9f9-9a33-4227-acb2-c43f50b03261] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.005211361s
addons_test.go:340: (dbg) Run:  kubectl --context addons-944407 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-944407 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-944407 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.471913612s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-arm64 -p addons-944407 ip
2024/01/15 10:54:07 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p addons-944407 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.66s)
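
The registry addon is probed twice above: once from inside the cluster via the kube-system Service DNS name, and once from the host through registry-proxy on port 5000. A sketch of the same two checks by hand, assuming the addons-944407 profile is still running (the "registry-probe" pod name and the /v2/ registry-API path are illustrative, not what the test helper literally uses):

	kubectl --context addons-944407 run registry-probe --rm --restart=Never -it \
	  --image=gcr.io/k8s-minikube/busybox -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
	curl -v "http://$(out/minikube-linux-arm64 -p addons-944407 ip):5000/v2/"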

TestAddons/parallel/InspektorGadget (12.01s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-dlk8l" [cad29a88-55f8-4de8-90ba-faf65b766108] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.00470345s
addons_test.go:841: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-944407
addons_test.go:841: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-944407: (6.004250341s)
--- PASS: TestAddons/parallel/InspektorGadget (12.01s)

TestAddons/parallel/MetricsServer (6.5s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 7.028483ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-6vpnh" [ccf80058-1c18-44ab-b238-6546e3a32eca] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005307879s
addons_test.go:415: (dbg) Run:  kubectl --context addons-944407 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-arm64 -p addons-944407 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:432: (dbg) Done: out/minikube-linux-arm64 -p addons-944407 addons disable metrics-server --alsologtostderr -v=1: (1.388599567s)
--- PASS: TestAddons/parallel/MetricsServer (6.50s)

TestAddons/parallel/CSI (70.03s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 44.994903ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-944407 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-944407 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-944407 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-944407 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-944407 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-944407 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-944407 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-944407 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-944407 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-944407 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-944407 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-944407 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-944407 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-944407 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-944407 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-944407 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-944407 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-944407 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-944407 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-944407 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-944407 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-944407 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-944407 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [a1b5a302-d9de-4e0f-a04d-c0e08962a5a4] Pending
helpers_test.go:344: "task-pv-pod" [a1b5a302-d9de-4e0f-a04d-c0e08962a5a4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [a1b5a302-d9de-4e0f-a04d-c0e08962a5a4] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.003521194s
addons_test.go:584: (dbg) Run:  kubectl --context addons-944407 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-944407 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-944407 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-944407 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-944407 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-944407 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-944407 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-944407 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-944407 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-944407 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-944407 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-944407 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-944407 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-944407 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-944407 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-944407 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-944407 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-944407 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-944407 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-944407 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-944407 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-944407 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-944407 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-944407 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [473c60d2-4464-48a4-bff6-0690d224e783] Pending
helpers_test.go:344: "task-pv-pod-restore" [473c60d2-4464-48a4-bff6-0690d224e783] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [473c60d2-4464-48a4-bff6-0690d224e783] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003742869s
addons_test.go:626: (dbg) Run:  kubectl --context addons-944407 delete pod task-pv-pod-restore
addons_test.go:626: (dbg) Done: kubectl --context addons-944407 delete pod task-pv-pod-restore: (1.295472362s)
addons_test.go:630: (dbg) Run:  kubectl --context addons-944407 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-944407 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-arm64 -p addons-944407 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-arm64 -p addons-944407 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.948139394s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-arm64 -p addons-944407 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (70.03s)
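
The sequence above is the full CSI snapshot/restore round trip: bind a PVC, write through a pod, snapshot the volume, delete the originals, then restore from the snapshot. Condensed into the underlying kubectl steps (a sketch; the kubectl wait condition is one way to poll readyToUse, not what the test helper literally runs):

	kubectl --context addons-944407 create -f testdata/csi-hostpath-driver/pvc.yaml
	kubectl --context addons-944407 create -f testdata/csi-hostpath-driver/pv-pod.yaml
	kubectl --context addons-944407 create -f testdata/csi-hostpath-driver/snapshot.yaml
	kubectl --context addons-944407 wait volumesnapshot/new-snapshot-demo \
	  --for=jsonpath='{.status.readyToUse}'=true
	kubectl --context addons-944407 delete pod task-pv-pod
	kubectl --context addons-944407 delete pvc hpvc
	kubectl --context addons-944407 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
	kubectl --context addons-944407 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml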

TestAddons/parallel/Headlamp (11.54s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-944407 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-944407 --alsologtostderr -v=1: (1.532503055s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-72t6n" [2aa63585-fd35-4704-908f-f2ea083b53f8] Pending
helpers_test.go:344: "headlamp-7ddfbb94ff-72t6n" [2aa63585-fd35-4704-908f-f2ea083b53f8] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-72t6n" [2aa63585-fd35-4704-908f-f2ea083b53f8] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003558345s
--- PASS: TestAddons/parallel/Headlamp (11.54s)

TestAddons/parallel/CloudSpanner (6.6s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-64c8c85f65-dbfv8" [cdd0de17-b3fd-468c-8d02-b314cc285d8f] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003597566s
addons_test.go:860: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-944407
--- PASS: TestAddons/parallel/CloudSpanner (6.60s)

TestAddons/parallel/LocalPath (8.53s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-944407 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-944407 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-944407 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-944407 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-944407 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-944407 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-944407 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [ff7c09d4-e803-4f73-a817-c1dbaa693cf7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [ff7c09d4-e803-4f73-a817-c1dbaa693cf7] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [ff7c09d4-e803-4f73-a817-c1dbaa693cf7] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003541816s
addons_test.go:891: (dbg) Run:  kubectl --context addons-944407 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-arm64 -p addons-944407 ssh "cat /opt/local-path-provisioner/pvc-3f6288d4-f87d-452a-a480-4172734919f2_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-944407 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-944407 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-arm64 -p addons-944407 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (8.53s)
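
The ssh "cat" above reads the file back from the node path the local-path provisioner uses, /opt/local-path-provisioner/<pv-name>_<namespace>_<pvc-name>. A sketch of deriving that path instead of hard-coding the volume name, which changes every run:

	PV=$(kubectl --context addons-944407 get pvc test-pvc -o jsonpath='{.spec.volumeName}')
	out/minikube-linux-arm64 -p addons-944407 ssh \
	  "cat /opt/local-path-provisioner/${PV}_default_test-pvc/file1"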

TestAddons/parallel/NvidiaDevicePlugin (5.54s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-wlxzq" [13278ec8-c26a-491b-a4a8-b0324424d3a7] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004371035s
addons_test.go:955: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-944407
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.54s)

TestAddons/parallel/Yakd (6s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-8bsq2" [f565fb40-fdec-4ad1-b570-d49c856945dc] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003878554s
--- PASS: TestAddons/parallel/Yakd (6.00s)

TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-944407 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-944407 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)
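
The check above relies on the gcp-auth addon's webhook copying its credentials secret into every newly created namespace. Reproduced by hand, assuming the addon is still enabled ("demo-ns" is an illustrative namespace name):

	kubectl --context addons-944407 create ns demo-ns
	kubectl --context addons-944407 get secret gcp-auth -n demo-ns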

TestAddons/StoppedEnableDisable (12.32s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-944407
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-944407: (11.983720171s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-944407
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-944407
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-944407
--- PASS: TestAddons/StoppedEnableDisable (12.32s)

TestCertOptions (35.49s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-588649 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-588649 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (32.741430276s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-588649 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-588649 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-588649 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-588649" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-588649
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-588649: (2.019732801s)
--- PASS: TestCertOptions (35.49s)
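
The openssl call above is how the test confirms the extra --apiserver-ips/--apiserver-names landed in the serving certificate. Two hand checks in the same spirit (the grep and jsonpath filter are illustrative, not the test's own assertions):

	out/minikube-linux-arm64 -p cert-options-588649 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 'Subject Alternative Name'
	kubectl config view \
	  -o jsonpath='{.clusters[?(@.name=="cert-options-588649")].cluster.server}'
	# the server URL should end in :8555 rather than the default 8443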

TestCertExpiration (233.9s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-405895 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
E0115 11:33:51.214947 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-405895 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (32.872155283s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-405895 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-405895 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (18.585581756s)
helpers_test.go:175: Cleaning up "cert-expiration-405895" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-405895
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-405895: (2.437813175s)
--- PASS: TestCertExpiration (233.90s)
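
The test starts with --cert-expiration=3m, waits out the three minutes, then restarts with 8760h so minikube re-issues the certificates. A sketch of inspecting the resulting validity window directly on the node, assuming the profile is still up:

	out/minikube-linux-arm64 -p cert-expiration-405895 ssh \
	  "openssl x509 -startdate -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"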

TestForceSystemdFlag (31.66s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-432636 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-432636 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (28.928473942s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-432636 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-432636" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-432636
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-432636: (2.41899804s)
--- PASS: TestForceSystemdFlag (31.66s)
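
The cat above pulls CRI-O's generated drop-in; with --force-systemd the interesting line is the cgroup manager selection. A narrower check (a sketch; the expected value reflects what --force-systemd is supposed to configure, not output copied from this run):

	out/minikube-linux-arm64 -p force-systemd-flag-432636 ssh \
	  "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"
	# expected: cgroup_manager = "systemd"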

TestForceSystemdEnv (39.3s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-145251 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0115 11:31:54.261649 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/client.crt: no such file or directory
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-145251 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (36.840319482s)
helpers_test.go:175: Cleaning up "force-systemd-env-145251" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-145251
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-145251: (2.45825178s)
--- PASS: TestForceSystemdEnv (39.30s)

TestErrorSpam/setup (29.32s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-848670 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-848670 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-848670 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-848670 --driver=docker  --container-runtime=crio: (29.314379353s)
--- PASS: TestErrorSpam/setup (29.32s)

TestErrorSpam/start (0.9s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-848670 --log_dir /tmp/nospam-848670 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-848670 --log_dir /tmp/nospam-848670 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-848670 --log_dir /tmp/nospam-848670 start --dry-run
--- PASS: TestErrorSpam/start (0.90s)

TestErrorSpam/status (1.15s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-848670 --log_dir /tmp/nospam-848670 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-848670 --log_dir /tmp/nospam-848670 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-848670 --log_dir /tmp/nospam-848670 status
--- PASS: TestErrorSpam/status (1.15s)

TestErrorSpam/pause (1.86s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-848670 --log_dir /tmp/nospam-848670 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-848670 --log_dir /tmp/nospam-848670 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-848670 --log_dir /tmp/nospam-848670 pause
--- PASS: TestErrorSpam/pause (1.86s)

TestErrorSpam/unpause (1.99s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-848670 --log_dir /tmp/nospam-848670 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-848670 --log_dir /tmp/nospam-848670 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-848670 --log_dir /tmp/nospam-848670 unpause
--- PASS: TestErrorSpam/unpause (1.99s)

TestErrorSpam/stop (1.46s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-848670 --log_dir /tmp/nospam-848670 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-848670 --log_dir /tmp/nospam-848670 stop: (1.220131499s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-848670 --log_dir /tmp/nospam-848670 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-848670 --log_dir /tmp/nospam-848670 stop
--- PASS: TestErrorSpam/stop (1.46s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /home/jenkins/minikube-integration/17953-1625104/.minikube/files/etc/test/nested/copy/1630435/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (75.52s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-linux-arm64 start -p functional-641147 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0115 10:58:51.214739 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/client.crt: no such file or directory
E0115 10:58:51.221720 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/client.crt: no such file or directory
E0115 10:58:51.231954 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/client.crt: no such file or directory
E0115 10:58:51.252201 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/client.crt: no such file or directory
E0115 10:58:51.292449 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/client.crt: no such file or directory
E0115 10:58:51.372710 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/client.crt: no such file or directory
E0115 10:58:51.533846 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/client.crt: no such file or directory
E0115 10:58:51.854355 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/client.crt: no such file or directory
E0115 10:58:52.495305 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/client.crt: no such file or directory
E0115 10:58:53.775502 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/client.crt: no such file or directory
E0115 10:58:56.335709 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/client.crt: no such file or directory
E0115 10:59:01.456575 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/client.crt: no such file or directory
E0115 10:59:11.696776 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/client.crt: no such file or directory
E0115 10:59:32.177604 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/client.crt: no such file or directory
functional_test.go:2233: (dbg) Done: out/minikube-linux-arm64 start -p functional-641147 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m15.51761733s)
--- PASS: TestFunctional/serial/StartWithProxy (75.52s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (34.58s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-641147 --alsologtostderr -v=8
E0115 11:00:13.138679 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-641147 --alsologtostderr -v=8: (34.582345481s)
functional_test.go:659: soft start took 34.584439738s for "functional-641147" cluster.
--- PASS: TestFunctional/serial/SoftStart (34.58s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-641147 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.85s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-641147 cache add registry.k8s.io/pause:3.1: (1.278226584s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-641147 cache add registry.k8s.io/pause:3.3: (1.3449869s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-641147 cache add registry.k8s.io/pause:latest: (1.231463541s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.85s)

TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-641147 /tmp/TestFunctionalserialCacheCmdcacheadd_local1422483856/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 cache add minikube-local-cache-test:functional-641147
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 cache delete minikube-local-cache-test:functional-641147
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-641147
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.36s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.36s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-641147 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (324.982551ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-641147 cache reload: (1.137226861s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.16s)
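
The cache tests above form one workflow: delete an image from inside the node, confirm crictl no longer sees it, then let "cache reload" push everything in minikube's local cache back into the runtime. The same loop by hand, assuming the functional-641147 profile is running:

	out/minikube-linux-arm64 -p functional-641147 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-arm64 -p functional-641147 cache reload
	out/minikube-linux-arm64 -p functional-641147 ssh sudo crictl inspecti registry.k8s.io/pause:latest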

TestFunctional/serial/CacheCmd/cache/delete (0.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

TestFunctional/serial/MinikubeKubectlCmd (0.16s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 kubectl -- --context functional-641147 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.16s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-641147 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

TestFunctional/serial/ExtraConfig (36.7s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-641147 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-641147 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.697032891s)
functional_test.go:757: restart took 36.69712495s for "functional-641147" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (36.70s)
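
--extra-config takes component.key=value pairs and threads them into the generated component configuration. A sketch of confirming the apiserver actually picked the flag up, assuming the standard kubeadm static-pod manifest path on the node:

	out/minikube-linux-arm64 -p functional-641147 ssh \
	  "grep enable-admission-plugins /etc/kubernetes/manifests/kube-apiserver.yaml"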

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-641147 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
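
The health check above parses the control-plane pods' phase and Ready status out of JSON. A compact hand-rolled version of the same read (the jsonpath formatting is illustrative, not the test's own query):

	kubectl --context functional-641147 get po -l tier=control-plane -n kube-system \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'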

TestFunctional/serial/LogsCmd (1.82s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-641147 logs: (1.821963997s)
--- PASS: TestFunctional/serial/LogsCmd (1.82s)

TestFunctional/serial/LogsFileCmd (1.83s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 logs --file /tmp/TestFunctionalserialLogsFileCmd1412953198/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-641147 logs --file /tmp/TestFunctionalserialLogsFileCmd1412953198/001/logs.txt: (1.830351379s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.83s)

TestFunctional/serial/InvalidService (4.33s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-641147 apply -f testdata/invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-641147
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-641147: exit status 115 (686.729488ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30308 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-641147 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.33s)
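
A service whose backing pod never comes up makes `minikube service` fail with exit status 115 (the SVC_UNREACHABLE error above). A sketch of detecting that specific exit code from Go, assuming the same binary and profile as the log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// `minikube service` exits 115 when the service exists but has no
	// running pod behind it, as the stderr block above shows.
	cmd := exec.Command("out/minikube-linux-arm64", "service", "invalid-svc", "-p", "functional-641147")
	err := cmd.Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 115 {
		fmt.Println("got the expected SVC_UNREACHABLE exit code")
	}
}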

x
+
TestFunctional/parallel/ConfigCmd (0.62s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-641147 config get cpus: exit status 14 (94.535448ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-641147 config get cpus: exit status 14 (122.808915ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.62s)
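
The cycle above exercises config set/get/unset; `config get` on an absent key exits with status 14 rather than 0. A sketch that reproduces the cycle and surfaces the exit codes, using the binary path and profile from the log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// exitCode runs a `minikube config` subcommand and returns its exit status.
func exitCode(args ...string) int {
	full := append([]string{"-p", "functional-641147", "config"}, args...)
	err := exec.Command("out/minikube-linux-arm64", full...).Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode()
	}
	return 0
}

func main() {
	fmt.Println(exitCode("unset", "cpus"))    // 0: unsetting is idempotent
	fmt.Println(exitCode("get", "cpus"))      // 14: key not present in config
	fmt.Println(exitCode("set", "cpus", "2")) // 0
	fmt.Println(exitCode("get", "cpus"))      // 0
	fmt.Println(exitCode("unset", "cpus"))    // 0
	fmt.Println(exitCode("get", "cpus"))      // 14 again
}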

x
+
TestFunctional/parallel/DashboardCmd (10.41s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-641147 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-641147 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1654524: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.41s)

x
+
TestFunctional/parallel/DryRun (0.53s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-641147 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-641147 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (228.653073ms)

-- stdout --
	* [functional-641147] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17953
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17953-1625104/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-1625104/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0115 11:01:49.228724 1654290 out.go:296] Setting OutFile to fd 1 ...
	I0115 11:01:49.228897 1654290 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 11:01:49.228904 1654290 out.go:309] Setting ErrFile to fd 2...
	I0115 11:01:49.228910 1654290 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 11:01:49.229239 1654290 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-1625104/.minikube/bin
	I0115 11:01:49.229651 1654290 out.go:303] Setting JSON to false
	I0115 11:01:49.230616 1654290 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":35051,"bootTime":1705281458,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0115 11:01:49.230687 1654290 start.go:138] virtualization:  
	I0115 11:01:49.234121 1654290 out.go:177] * [functional-641147] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0115 11:01:49.237808 1654290 out.go:177]   - MINIKUBE_LOCATION=17953
	I0115 11:01:49.240419 1654290 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 11:01:49.237983 1654290 notify.go:220] Checking for updates...
	I0115 11:01:49.245764 1654290 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17953-1625104/kubeconfig
	I0115 11:01:49.248556 1654290 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-1625104/.minikube
	I0115 11:01:49.251302 1654290 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0115 11:01:49.253751 1654290 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0115 11:01:49.256939 1654290 config.go:182] Loaded profile config "functional-641147": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 11:01:49.257524 1654290 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 11:01:49.283169 1654290 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0115 11:01:49.283281 1654290 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 11:01:49.370322 1654290 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:46 SystemTime:2024-01-15 11:01:49.360264942 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0115 11:01:49.370419 1654290 docker.go:295] overlay module found
	I0115 11:01:49.374994 1654290 out.go:177] * Using the docker driver based on existing profile
	I0115 11:01:49.377534 1654290 start.go:298] selected driver: docker
	I0115 11:01:49.377553 1654290 start.go:902] validating driver "docker" against &{Name:functional-641147 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-641147 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 11:01:49.377664 1654290 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0115 11:01:49.380861 1654290 out.go:177] 
	W0115 11:01:49.383816 1654290 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0115 11:01:49.386363 1654290 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-641147 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.53s)
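
The dry run fails fast because 250MB is below minikube's usable minimum of 1800MB, exiting 23 before any cluster work starts. A simplified sketch of that validation; the constants come from the log above, and minikube's real unit handling is more involved than this:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Values taken from the RSRC_INSUFFICIENT_REQ_MEMORY message above.
	const minUsableMB = 1800
	requestedMB := 250
	if requestedMB < minUsableMB {
		fmt.Printf("X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation %dMiB is less than the usable minimum of %dMB\n",
			requestedMB, minUsableMB)
		os.Exit(23) // the exit status the test asserts
	}
}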

x
+
TestFunctional/parallel/InternationalLanguage (0.28s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-641147 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-641147 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (276.413363ms)

-- stdout --
	* [functional-641147] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17953
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17953-1625104/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-1625104/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0115 11:01:48.998416 1654228 out.go:296] Setting OutFile to fd 1 ...
	I0115 11:01:48.998674 1654228 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 11:01:48.998718 1654228 out.go:309] Setting ErrFile to fd 2...
	I0115 11:01:48.998739 1654228 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 11:01:48.999757 1654228 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-1625104/.minikube/bin
	I0115 11:01:49.001545 1654228 out.go:303] Setting JSON to false
	I0115 11:01:49.003001 1654228 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":35051,"bootTime":1705281458,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0115 11:01:49.003145 1654228 start.go:138] virtualization:  
	I0115 11:01:49.006554 1654228 out.go:177] * [functional-641147] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	I0115 11:01:49.009987 1654228 out.go:177]   - MINIKUBE_LOCATION=17953
	I0115 11:01:49.010086 1654228 notify.go:220] Checking for updates...
	I0115 11:01:49.012650 1654228 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 11:01:49.015784 1654228 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17953-1625104/kubeconfig
	I0115 11:01:49.018460 1654228 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-1625104/.minikube
	I0115 11:01:49.021268 1654228 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0115 11:01:49.023907 1654228 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0115 11:01:49.026982 1654228 config.go:182] Loaded profile config "functional-641147": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 11:01:49.027661 1654228 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 11:01:49.051789 1654228 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0115 11:01:49.051921 1654228 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 11:01:49.141546 1654228 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:46 SystemTime:2024-01-15 11:01:49.130637492 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0115 11:01:49.141652 1654228 docker.go:295] overlay module found
	I0115 11:01:49.146362 1654228 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0115 11:01:49.148772 1654228 start.go:298] selected driver: docker
	I0115 11:01:49.148794 1654228 start.go:902] validating driver "docker" against &{Name:functional-641147 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-641147 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 11:01:49.148896 1654228 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0115 11:01:49.152130 1654228 out.go:177] 
	W0115 11:01:49.154722 1654228 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0115 11:01:49.157184 1654228 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.28s)

x
+
TestFunctional/parallel/StatusCmd (1.27s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.27s)

x
+
TestFunctional/parallel/ServiceCmdConnect (12.82s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-641147 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-641147 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-4j9pm" [28c9c983-e141-4720-8368-8d0a645238da] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-4j9pm" [28c9c983-e141-4720-8368-8d0a645238da] Running
E0115 11:01:35.059092 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/client.crt: no such file or directory
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.004457035s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:30697
functional_test.go:1674: http://192.168.49.2:30697: success! body:

Hostname: hello-node-connect-7799dfb7c6-4j9pm

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30697
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.82s)
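
Between exposing the NodePort service and reading the echoserver body, the endpoint can take a moment to start answering, so a poll with a deadline is the natural shape of the check. A sketch, with the URL taken from the log above:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForEndpoint polls a URL until it answers 200 or the deadline
// passes, similar in spirit to the retry the test performs before
// reading the echoserver response.
func waitForEndpoint(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("%s not reachable within %v", url, timeout)
}

func main() {
	// NodePort endpoint printed by `minikube service ... --url` above.
	fmt.Println(waitForEndpoint("http://192.168.49.2:30697", time.Minute))
}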

x
+
TestFunctional/parallel/AddonsCmd (0.33s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.33s)

x
+
TestFunctional/parallel/PersistentVolumeClaim (25.82s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [e5e46f7b-baca-4376-bc18-cd9a295cb967] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004438707s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-641147 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-641147 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-641147 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-641147 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f540a52e-a25c-44e7-b5b1-d4e9b14da525] Pending
helpers_test.go:344: "sp-pod" [f540a52e-a25c-44e7-b5b1-d4e9b14da525] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f540a52e-a25c-44e7-b5b1-d4e9b14da525] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003919827s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-641147 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-641147 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-641147 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c5548576-c768-41d3-b7cc-40a5575801dc] Pending
helpers_test.go:344: "sp-pod" [c5548576-c768-41d3-b7cc-40a5575801dc] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003993162s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-641147 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.82s)
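
The sequence above is a persistence check: write a file through one pod, delete and recreate the pod, and confirm the file survived on the PVC-backed volume. In outline, using the same context and manifests as the log (a real version would wait for the new sp-pod to be Running before the final step):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	steps := [][]string{
		{"exec", "sp-pod", "--", "touch", "/tmp/mount/foo"},          // write through pod 1
		{"delete", "-f", "testdata/storage-provisioner/pod.yaml"},    // destroy the pod
		{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},     // recreate it
		{"exec", "sp-pod", "--", "ls", "/tmp/mount"},                 // file should still be there
	}
	for _, s := range steps {
		args := append([]string{"--context", "functional-641147"}, s...)
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		fmt.Printf("kubectl %v -> %s (err=%v)\n", s, out, err)
	}
}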

x
+
TestFunctional/parallel/SSHCmd (0.75s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.75s)

x
+
TestFunctional/parallel/CpCmd (2.64s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 ssh -n functional-641147 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 cp functional-641147:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3510253551/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 ssh -n functional-641147 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 ssh -n functional-641147 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.64s)

x
+
TestFunctional/parallel/FileSync (0.55s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/1630435/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 ssh "sudo cat /etc/test/nested/copy/1630435/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.55s)

x
+
TestFunctional/parallel/CertSync (2.75s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/1630435.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 ssh "sudo cat /etc/ssl/certs/1630435.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/1630435.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 ssh "sudo cat /usr/share/ca-certificates/1630435.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/16304352.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 ssh "sudo cat /etc/ssl/certs/16304352.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/16304352.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 ssh "sudo cat /usr/share/ca-certificates/16304352.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.75s)
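
The test reads the same certificate from three locations inside the node: the synced .pem, its copy under /usr/share/ca-certificates, and the hash-named link in /etc/ssl/certs. A sketch of the same probe over `minikube ssh`, with paths and profile taken from the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	paths := []string{
		"/etc/ssl/certs/1630435.pem",
		"/usr/share/ca-certificates/1630435.pem",
		"/etc/ssl/certs/51391683.0", // hash-named entry for the same cert
	}
	for _, p := range paths {
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-641147",
			"ssh", "sudo cat "+p).Output()
		fmt.Printf("%s: %d bytes (err=%v)\n", p, len(out), err)
	}
}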

x
+
TestFunctional/parallel/NodeLabels (0.11s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-641147 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.74s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 ssh "sudo systemctl is-active docker"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-641147 ssh "sudo systemctl is-active docker": exit status 1 (419.861725ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2026: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 ssh "sudo systemctl is-active containerd"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-641147 ssh "sudo systemctl is-active containerd": exit status 1 (324.110678ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.74s)
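
With crio as the active runtime, `systemctl is-active docker` prints "inactive" and exits non-zero (status 3 inside the node, surfaced above as ssh exit 1), so the failing exit is the expected result here, not an error. A sketch of treating it that way:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		// CombinedOutput still returns stdout when the command exits non-zero.
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-641147",
			"ssh", "sudo systemctl is-active "+unit).CombinedOutput()
		state := strings.TrimSpace(string(out))
		if err != nil && state == "inactive" {
			fmt.Println(unit, "is disabled, as expected with crio active")
		}
	}
}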

x
+
TestFunctional/parallel/License (0.31s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.31s)

x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.70s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-641147 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-641147 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-641147 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-641147 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1652373: os: process already finished
helpers_test.go:502: unable to terminate pid 1652212: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.70s)

x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-641147 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.53s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-641147 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [9f298a47-82e7-4536-9777-cf552f976def] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [9f298a47-82e7-4536-9777-cf552f976def] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004273182s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.53s)

x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.13s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-641147 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.13s)

x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.106.29.212 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
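
While `minikube tunnel` runs, the LoadBalancer service is assigned an ingress IP (10.106.29.212 above) that is directly routable from the host. A sketch that reads the IP the same way the IngressIP step does and then requests it:

package main

import (
	"fmt"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-641147",
		"get", "svc", "nginx-svc", "-o",
		"jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
	if err != nil {
		panic(err)
	}
	ip := strings.TrimSpace(string(out))
	// With the tunnel up, this request goes straight to the service.
	resp, err := http.Get("http://" + ip)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(ip, "->", resp.Status)
}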

x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-641147 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

x
+
TestFunctional/parallel/ServiceCmd/DeployApp (6.23s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-641147 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-641147 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-dpgxq" [05073360-466f-4bed-ae12-dccc5f0086c6] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-dpgxq" [05073360-466f-4bed-ae12-dccc5f0086c6] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.010383781s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.23s)

x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1314: Took "361.821031ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1328: Took "78.614803ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1365: Took "359.911294ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1378: Took "77.491722ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)
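
The JSON output is meant for machine consumption. A sketch of decoding it; the field names here (a top-level "valid" array of profiles with a Name) are an assumption for illustration, not a documented schema:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "profile", "list", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	// Decode only what we need; unknown fields are ignored.
	var profiles struct {
		Valid []struct {
			Name string `json:"Name"`
		} `json:"valid"`
	}
	if err := json.Unmarshal(out, &profiles); err != nil {
		panic(err)
	}
	for _, p := range profiles.Valid {
		fmt.Println(p.Name)
	}
}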

x
+
TestFunctional/parallel/MountCmd/any-port (8.55s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-641147 /tmp/TestFunctionalparallelMountCmdany-port659001967/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1705316504405314194" to /tmp/TestFunctionalparallelMountCmdany-port659001967/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1705316504405314194" to /tmp/TestFunctionalparallelMountCmdany-port659001967/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1705316504405314194" to /tmp/TestFunctionalparallelMountCmdany-port659001967/001/test-1705316504405314194
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-641147 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (390.740044ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 15 11:01 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 15 11:01 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 15 11:01 test-1705316504405314194
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 ssh cat /mount-9p/test-1705316504405314194
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-641147 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [bf0d111b-5323-455c-a2f4-e3d07603584a] Pending
helpers_test.go:344: "busybox-mount" [bf0d111b-5323-455c-a2f4-e3d07603584a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [bf0d111b-5323-455c-a2f4-e3d07603584a] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [bf0d111b-5323-455c-a2f4-e3d07603584a] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004172926s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-641147 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-641147 /tmp/TestFunctionalparallelMountCmdany-port659001967/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.55s)
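
The first findmnt probe fails with exit status 1 because it races the 9p mount becoming visible, and the check is simply retried. A minimal retry loop in the same spirit, with the binary and mount point from the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	var err error
	for i := 0; i < 10; i++ {
		// Succeeds once the 9p mount shows up inside the node.
		err = exec.Command("out/minikube-linux-arm64", "-p", "functional-641147",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			break
		}
		time.Sleep(time.Second)
	}
	fmt.Println("mount visible:", err == nil)
}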

x
+
TestFunctional/parallel/ServiceCmd/List (0.58s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.58s)

x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.67s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 service list -o json
functional_test.go:1493: Took "668.904584ms" to run "out/minikube-linux-arm64 -p functional-641147 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.67s)

x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:31186
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

x
+
TestFunctional/parallel/ServiceCmd/Format (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.55s)

x
+
TestFunctional/parallel/ServiceCmd/URL (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:31186
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.44s)

x
+
TestFunctional/parallel/MountCmd/specific-port (2.16s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-641147 /tmp/TestFunctionalparallelMountCmdspecific-port2754923096/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-641147 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (400.415722ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-641147 /tmp/TestFunctionalparallelMountCmdspecific-port2754923096/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-641147 ssh "sudo umount -f /mount-9p": exit status 1 (448.039924ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-641147 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-641147 /tmp/TestFunctionalparallelMountCmdspecific-port2754923096/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.16s)
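The first findmnt probe exits non-zero, most likely because the 9p mount had not finished coming up, and the test simply retries; the final forced umount then reports "not mounted" because the mount daemon had already been stopped. The check can be reproduced by hand with the same commands the test runs (a sketch; /tmp/somedir is a hypothetical host directory, profile and port match the test above):

	out/minikube-linux-arm64 mount -p functional-641147 /tmp/somedir:/mount-9p --port 46464 &
	out/minikube-linux-arm64 -p functional-641147 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-arm64 -p functional-641147 ssh "sudo umount -f /mount-9p"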

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (3.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-641147 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4282003983/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-641147 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4282003983/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-641147 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4282003983/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-641147 ssh "findmnt -T" /mount1: exit status 1 (1.171025481s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-641147 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-641147 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4282003983/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-641147 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4282003983/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-641147 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4282003983/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (3.26s)
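Cleanup here goes through a single kill switch rather than per-path unmounts: "mount --kill=true" terminates every background mount process for the profile, which is presumably why the three stop steps that follow find no parent process left. The equivalent manual cleanup is the command the test itself runs:

	out/minikube-linux-arm64 mount -p functional-641147 --kill=true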

                                                
                                    
TestFunctional/parallel/Version/short (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 version --short
--- PASS: TestFunctional/parallel/Version/short (0.18s)

                                                
                                    
TestFunctional/parallel/Version/components (1.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 version -o=json --components
functional_test.go:2269: (dbg) Done: out/minikube-linux-arm64 -p functional-641147 version -o=json --components: (1.557531677s)
--- PASS: TestFunctional/parallel/Version/components (1.56s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-641147 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-641147
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-641147 image ls --format short --alsologtostderr:
I0115 11:02:19.897799 1656796 out.go:296] Setting OutFile to fd 1 ...
I0115 11:02:19.897982 1656796 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 11:02:19.898011 1656796 out.go:309] Setting ErrFile to fd 2...
I0115 11:02:19.898032 1656796 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 11:02:19.898322 1656796 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-1625104/.minikube/bin
I0115 11:02:19.901692 1656796 config.go:182] Loaded profile config "functional-641147": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0115 11:02:19.901907 1656796 config.go:182] Loaded profile config "functional-641147": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0115 11:02:19.902743 1656796 cli_runner.go:164] Run: docker container inspect functional-641147 --format={{.State.Status}}
I0115 11:02:19.922593 1656796 ssh_runner.go:195] Run: systemctl --version
I0115 11:02:19.922648 1656796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-641147
I0115 11:02:19.943351 1656796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34729 SSHKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/functional-641147/id_rsa Username:docker}
I0115 11:02:20.040613 1656796 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)
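As the stderr trace shows, "image ls" is answered by ssh-ing into the node and querying the container runtime's image store; on this crio runtime that bottoms out in:

	sudo crictl images --output json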

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-641147 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 04b4c447bb9d4 | 121MB  |
| registry.k8s.io/pause                   | 3.9                | 829e9de338bd5 | 520kB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | 04b4eaa3d3db8 | 60.9MB |
| docker.io/library/nginx                 | alpine             | 74077e780ec71 | 45.3MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 9cdd6470f48c8 | 182MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | 97e04611ad434 | 51.4MB |
| registry.k8s.io/kube-controller-manager | v1.28.4            | 9961cbceaf234 | 117MB  |
| registry.k8s.io/kube-proxy              | v1.28.4            | 3ca3ca488cf13 | 70MB   |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| docker.io/library/nginx                 | latest             | 6c7be49d2a11c | 196MB  |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/kube-scheduler          | v1.28.4            | 05c284c929889 | 59.3MB |
| gcr.io/google-containers/addon-resizer  | functional-641147  | ffd4cfbbe753e | 34.1MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-641147 image ls --format table --alsologtostderr:
I0115 11:02:20.579270 1656938 out.go:296] Setting OutFile to fd 1 ...
I0115 11:02:20.579516 1656938 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 11:02:20.579548 1656938 out.go:309] Setting ErrFile to fd 2...
I0115 11:02:20.579568 1656938 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 11:02:20.579848 1656938 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-1625104/.minikube/bin
I0115 11:02:20.580574 1656938 config.go:182] Loaded profile config "functional-641147": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0115 11:02:20.580830 1656938 config.go:182] Loaded profile config "functional-641147": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0115 11:02:20.581427 1656938 cli_runner.go:164] Run: docker container inspect functional-641147 --format={{.State.Status}}
I0115 11:02:20.611097 1656938 ssh_runner.go:195] Run: systemctl --version
I0115 11:02:20.611156 1656938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-641147
I0115 11:02:20.632846 1656938 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34729 SSHKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/functional-641147/id_rsa Username:docker}
I0115 11:02:20.738507 1656938 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-641147 image ls --format json --alsologtostderr:
[{"id":"6c7be49d2a11cfab9a87362ad27d447b45931e43dfa6919a8e1398ec09c1e353","repoDigests":["docker.io/library/nginx@sha256:4c0fdaa8b6341bfdeca5f18f7837462c80cff90527ee35ef185571e1c327beac","docker.io/library/nginx@sha256:523c417937604bc107d799e5cad1ae2ca8a9fd46306634fa2c547dc6220ec17c"],"repoTags":["docker.io/library/nginx:latest"],"size":"196113558"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"520014"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"74077e780ec714353793e0ef5677b55d7396aa1d31e77ec899f54842f7142448","repoDigests":["docker.io/library/nginx@sha256:7913e8fa2e6a5f0160a5e6b7ea48b7d4a301c6058d63c3d632a35a59093cb4eb","docker.io/library/nginx@sha256:a59278fd22a9d411121e190b8cec8aa57b306aa3332459197777583beb728f59"],"repoTags":["docker.io/library/nginx:alpine"],"size":"45330189"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3","registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"182203183"},{"id":"05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"59253556"},{"id":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"60867618"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51393451"},{"id":"04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb","registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"121119694"},{"id":"9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"117252916"},{"id":"3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","repoDigests":["registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"69992343"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-641147"],"size":"34114467"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-641147 image ls --format json --alsologtostderr:
I0115 11:02:20.240946 1656856 out.go:296] Setting OutFile to fd 1 ...
I0115 11:02:20.241093 1656856 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 11:02:20.241102 1656856 out.go:309] Setting ErrFile to fd 2...
I0115 11:02:20.241108 1656856 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 11:02:20.241353 1656856 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-1625104/.minikube/bin
I0115 11:02:20.242023 1656856 config.go:182] Loaded profile config "functional-641147": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0115 11:02:20.242170 1656856 config.go:182] Loaded profile config "functional-641147": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0115 11:02:20.242791 1656856 cli_runner.go:164] Run: docker container inspect functional-641147 --format={{.State.Status}}
I0115 11:02:20.271506 1656856 ssh_runner.go:195] Run: systemctl --version
I0115 11:02:20.271566 1656856 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-641147
I0115 11:02:20.320530 1656856 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34729 SSHKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/functional-641147/id_rsa Username:docker}
I0115 11:02:20.420196 1656856 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.35s)
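The JSON format is an array of objects with id, repoDigests, repoTags and size fields, which makes it the most convenient of the four list formats to post-process. A small sketch, assuming jq is available on the host:

	out/minikube-linux-arm64 -p functional-641147 image ls --format json \
	  | jq -r '.[] | "\(.repoTags[0] // "<none>")\t\(.size)"'

The ImageListYaml output below carries the same inventory in a more readable layout.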

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-641147 image ls --format yaml --alsologtostderr:
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "59253556"
- id: 6c7be49d2a11cfab9a87362ad27d447b45931e43dfa6919a8e1398ec09c1e353
repoDigests:
- docker.io/library/nginx@sha256:4c0fdaa8b6341bfdeca5f18f7837462c80cff90527ee35ef185571e1c327beac
- docker.io/library/nginx@sha256:523c417937604bc107d799e5cad1ae2ca8a9fd46306634fa2c547dc6220ec17c
repoTags:
- docker.io/library/nginx:latest
size: "196113558"
- id: 9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "117252916"
- id: 04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
- registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "121119694"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 74077e780ec714353793e0ef5677b55d7396aa1d31e77ec899f54842f7142448
repoDigests:
- docker.io/library/nginx@sha256:7913e8fa2e6a5f0160a5e6b7ea48b7d4a301c6058d63c3d632a35a59093cb4eb
- docker.io/library/nginx@sha256:a59278fd22a9d411121e190b8cec8aa57b306aa3332459197777583beb728f59
repoTags:
- docker.io/library/nginx:alpine
size: "45330189"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
- registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "182203183"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "520014"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "60867618"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-641147
size: "34114467"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51393451"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39
repoDigests:
- registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "69992343"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-641147 image ls --format yaml --alsologtostderr:
I0115 11:02:19.902079 1656797 out.go:296] Setting OutFile to fd 1 ...
I0115 11:02:19.902202 1656797 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 11:02:19.902210 1656797 out.go:309] Setting ErrFile to fd 2...
I0115 11:02:19.902227 1656797 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 11:02:19.902556 1656797 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-1625104/.minikube/bin
I0115 11:02:19.903496 1656797 config.go:182] Loaded profile config "functional-641147": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0115 11:02:19.903692 1656797 config.go:182] Loaded profile config "functional-641147": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0115 11:02:19.904278 1656797 cli_runner.go:164] Run: docker container inspect functional-641147 --format={{.State.Status}}
I0115 11:02:19.924652 1656797 ssh_runner.go:195] Run: systemctl --version
I0115 11:02:19.924706 1656797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-641147
I0115 11:02:19.945038 1656797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34729 SSHKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/functional-641147/id_rsa Username:docker}
I0115 11:02:20.053379 1656797 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-641147 ssh pgrep buildkitd: exit status 1 (362.304318ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 image build -t localhost/my-image:functional-641147 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-641147 image build -t localhost/my-image:functional-641147 testdata/build --alsologtostderr: (2.247121829s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-arm64 -p functional-641147 image build -t localhost/my-image:functional-641147 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 0699895b488
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-641147
--> 79c37374807
Successfully tagged localhost/my-image:functional-641147
79c373748079e1d45201ea195e86660dd9c97bff8da76d654dc7a4c45981228b
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-641147 image build -t localhost/my-image:functional-641147 testdata/build --alsologtostderr:
I0115 11:02:20.563705 1656934 out.go:296] Setting OutFile to fd 1 ...
I0115 11:02:20.564868 1656934 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 11:02:20.564880 1656934 out.go:309] Setting ErrFile to fd 2...
I0115 11:02:20.564887 1656934 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 11:02:20.565195 1656934 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-1625104/.minikube/bin
I0115 11:02:20.565910 1656934 config.go:182] Loaded profile config "functional-641147": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0115 11:02:20.566713 1656934 config.go:182] Loaded profile config "functional-641147": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0115 11:02:20.567293 1656934 cli_runner.go:164] Run: docker container inspect functional-641147 --format={{.State.Status}}
I0115 11:02:20.590788 1656934 ssh_runner.go:195] Run: systemctl --version
I0115 11:02:20.590847 1656934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-641147
I0115 11:02:20.612118 1656934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34729 SSHKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/functional-641147/id_rsa Username:docker}
I0115 11:02:20.719980 1656934 build_images.go:151] Building image from path: /tmp/build.800025383.tar
I0115 11:02:20.720054 1656934 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0115 11:02:20.732600 1656934 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.800025383.tar
I0115 11:02:20.739740 1656934 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.800025383.tar: stat -c "%s %y" /var/lib/minikube/build/build.800025383.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.800025383.tar': No such file or directory
I0115 11:02:20.739772 1656934 ssh_runner.go:362] scp /tmp/build.800025383.tar --> /var/lib/minikube/build/build.800025383.tar (3072 bytes)
I0115 11:02:20.775624 1656934 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.800025383
I0115 11:02:20.787628 1656934 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.800025383 -xf /var/lib/minikube/build/build.800025383.tar
I0115 11:02:20.803946 1656934 crio.go:297] Building image: /var/lib/minikube/build/build.800025383
I0115 11:02:20.804033 1656934 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-641147 /var/lib/minikube/build/build.800025383 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0115 11:02:22.681974 1656934 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-641147 /var/lib/minikube/build/build.800025383 --cgroup-manager=cgroupfs: (1.877912781s)
I0115 11:02:22.682047 1656934 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.800025383
I0115 11:02:22.693917 1656934 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.800025383.tar
I0115 11:02:22.706917 1656934 build_images.go:207] Built localhost/my-image:functional-641147 from /tmp/build.800025383.tar
I0115 11:02:22.707003 1656934 build_images.go:123] succeeded building to: functional-641147
I0115 11:02:22.707017 1656934 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.88s)
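From the three STEP lines in the build output, the testdata/build context evidently holds a content.txt plus a Dockerfile of roughly this shape (reconstructed from the log, not quoted from the repository):

	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /

The stderr trace also shows how the build reaches the node under the crio runtime: the context is tarred to /tmp on the host, copied into /var/lib/minikube/build on the node, unpacked, and handed to "sudo podman build ... --cgroup-manager=cgroupfs".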

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
2024/01/15 11:01:59 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.754718496s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-641147
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.80s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.28s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.24s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 image load --daemon gcr.io/google-containers/addon-resizer:functional-641147 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-641147 image load --daemon gcr.io/google-containers/addon-resizer:functional-641147 --alsologtostderr: (5.181576132s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.45s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 image load --daemon gcr.io/google-containers/addon-resizer:functional-641147 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-641147 image load --daemon gcr.io/google-containers/addon-resizer:functional-641147 --alsologtostderr: (2.729415235s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.99s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.283316177s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-641147
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 image load --daemon gcr.io/google-containers/addon-resizer:functional-641147 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-641147 image load --daemon gcr.io/google-containers/addon-resizer:functional-641147 --alsologtostderr: (3.638540229s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 image save gcr.io/google-containers/addon-resizer:functional-641147 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.96s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 image rm gcr.io/google-containers/addon-resizer:functional-641147 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p functional-641147 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr: (1.020636545s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.27s)
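Together with ImageSaveToFile above, this completes a tarball round trip: the image is exported from the cluster runtime to a tar file and re-imported from it. In shell form (same commands as the two tests, with a hypothetical /tmp path standing in for the workspace path):

	out/minikube-linux-arm64 -p functional-641147 image save gcr.io/google-containers/addon-resizer:functional-641147 /tmp/addon-resizer-save.tar
	out/minikube-linux-arm64 -p functional-641147 image load /tmp/addon-resizer-save.tar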

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-641147
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-641147 image save --daemon gcr.io/google-containers/addon-resizer:functional-641147 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-641147
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.99s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.09s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-641147
--- PASS: TestFunctional/delete_addon-resizer_images (0.09s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-641147
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-641147
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (85.43s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-406064 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0115 11:03:51.214353 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-406064 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m25.428543715s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (85.43s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.93s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-406064 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-406064 addons enable ingress --alsologtostderr -v=5: (11.930101878s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.93s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.67s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-406064 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.67s)

                                                
                                    
TestJSONOutput/start/Command (51.79s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-901062 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0115 11:07:38.618433 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/functional-641147/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-901062 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (51.783749441s)
--- PASS: TestJSONOutput/start/Command (51.79s)
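Each line that --output=json emits is a CloudEvents-style JSON object (the TestErrorJSONOutput stdout further down shows the exact shape). Step progress can be filtered out of the stream with something like this sketch, assuming jq:

	out/minikube-linux-arm64 start -p json-output-901062 --output=json --user=testUser \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'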

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.81s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-901062 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.81s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.75s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-901062 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.75s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.89s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-901062 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-901062 --output=json --user=testUser: (5.893650019s)
--- PASS: TestJSONOutput/stop/Command (5.89s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.28s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-600446 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-600446 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (109.186321ms)

-- stdout --
	{"specversion":"1.0","id":"fe4a1bbe-e239-441c-a89a-896c63815a98","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-600446] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"428fa3b4-0c90-48ed-9d65-bf98c42d5315","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17953"}}
	{"specversion":"1.0","id":"2ffb1b34-d297-41af-b134-4c008e22a722","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f0e90291-1e53-4380-9bdc-96d1a2083e9b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17953-1625104/kubeconfig"}}
	{"specversion":"1.0","id":"9ecf0e8a-25f4-40c0-b39f-5d908baf1a51","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-1625104/.minikube"}}
	{"specversion":"1.0","id":"665565f1-166b-4a9b-9431-a58fe45481fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"15665533-8204-4426-a7ee-1198b4a9c282","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b29cfa76-e088-45f6-bee1-6a2fd8544404","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-600446" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-600446
--- PASS: TestErrorJSONOutput (0.28s)
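
For reference, a minimal Go sketch (not part of the test run) of how the CloudEvents-style lines in the stdout above can be consumed; the Event struct is inferred from the logged JSON, not an exported minikube type:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// Event mirrors the fields visible in the log lines above (assumed,
	// not an official schema). All data values appear as strings.
	type Event struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Source      string            `json:"source"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		// Pipe the output of e.g. `minikube start --output=json` to stdin.
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var ev Event
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip any non-JSON lines
			}
			// io.k8s.sigs.minikube.error events carry the exit code and
			// message, e.g. DRV_UNSUPPORTED_OS / exit code 56 above.
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("error %s (exit %s): %s\n",
					ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
			}
		}
	}

The `currentstep` field in the step events is the kind of value the DistinctCurrentSteps/IncreasingCurrentSteps subtests above assert on.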

TestKicCustomNetwork/create_custom_network (42.93s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-754595 --network=
E0115 11:08:51.215312 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-754595 --network=: (40.804142486s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-754595" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-754595
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-754595: (2.098933147s)
--- PASS: TestKicCustomNetwork/create_custom_network (42.93s)

TestKicCustomNetwork/use_default_bridge_network (33.22s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-629800 --network=bridge
E0115 11:09:00.538646 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/functional-641147/client.crt: no such file or directory
E0115 11:09:03.993268 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/client.crt: no such file or directory
E0115 11:09:03.998491 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/client.crt: no such file or directory
E0115 11:09:04.008704 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/client.crt: no such file or directory
E0115 11:09:04.028929 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/client.crt: no such file or directory
E0115 11:09:04.069487 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/client.crt: no such file or directory
E0115 11:09:04.149745 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/client.crt: no such file or directory
E0115 11:09:04.310378 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/client.crt: no such file or directory
E0115 11:09:04.630869 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/client.crt: no such file or directory
E0115 11:09:05.271497 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/client.crt: no such file or directory
E0115 11:09:06.551686 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/client.crt: no such file or directory
E0115 11:09:09.111836 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/client.crt: no such file or directory
E0115 11:09:14.233007 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/client.crt: no such file or directory
E0115 11:09:24.473567 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-629800 --network=bridge: (31.22878923s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-629800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-629800
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-629800: (1.958120278s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.22s)

TestKicExistingNetwork (35.49s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-361944 --network=existing-network
E0115 11:09:44.953724 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-361944 --network=existing-network: (33.251575588s)
helpers_test.go:175: Cleaning up "existing-network-361944" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-361944
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-361944: (2.058708735s)
--- PASS: TestKicExistingNetwork (35.49s)

TestKicCustomSubnet (37.9s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-028998 --subnet=192.168.60.0/24
E0115 11:10:25.914406 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-028998 --subnet=192.168.60.0/24: (35.694819173s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-028998 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-028998" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-028998
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-028998: (2.181883591s)
--- PASS: TestKicCustomSubnet (37.90s)
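
The subnet assertion above can be reproduced outside the suite with the same docker Go template; a small sketch, assuming docker is on PATH and using the network name and subnet from the log:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Values taken from the log above; adjust for your own profile.
		const network, want = "custom-subnet-028998", "192.168.60.0/24"
		// Same Go template the test passes to `docker network inspect`.
		out, err := exec.Command("docker", "network", "inspect", network,
			"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
		if err != nil {
			panic(err)
		}
		if got := strings.TrimSpace(string(out)); got != want {
			fmt.Printf("subnet mismatch: got %q, want %q\n", got, want)
		} else {
			fmt.Println("subnet matches:", got)
		}
	}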

TestKicStaticIP (38.16s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-073827 --static-ip=192.168.200.200
E0115 11:11:16.697360 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/functional-641147/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-073827 --static-ip=192.168.200.200: (35.910616367s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-073827 ip
helpers_test.go:175: Cleaning up "static-ip-073827" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-073827
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-073827: (2.072203462s)
--- PASS: TestKicStaticIP (38.16s)

TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (67.27s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-661316 --driver=docker  --container-runtime=crio
E0115 11:11:44.378851 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/functional-641147/client.crt: no such file or directory
E0115 11:11:47.834683 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-661316 --driver=docker  --container-runtime=crio: (29.687109307s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-663916 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-663916 --driver=docker  --container-runtime=crio: (31.937608195s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-661316
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-663916
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-663916" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-663916
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-663916: (1.989664829s)
helpers_test.go:175: Cleaning up "first-661316" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-661316
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-661316: (2.338487239s)
--- PASS: TestMinikubeProfile (67.27s)

TestMountStart/serial/StartWithMountFirst (6.5s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-679878 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-679878 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.504179322s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.50s)

TestMountStart/serial/VerifyMountFirst (0.3s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-679878 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.30s)

TestMountStart/serial/StartWithMountSecond (6.74s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-681675 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-681675 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.740768488s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.74s)

TestMountStart/serial/VerifyMountSecond (0.31s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-681675 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.31s)

TestMountStart/serial/DeleteFirst (1.67s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-679878 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-679878 --alsologtostderr -v=5: (1.668499283s)
--- PASS: TestMountStart/serial/DeleteFirst (1.67s)

TestMountStart/serial/VerifyMountPostDelete (0.3s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-681675 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.30s)

TestMountStart/serial/Stop (1.23s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-681675
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-681675: (1.229196622s)
--- PASS: TestMountStart/serial/Stop (1.23s)

TestMountStart/serial/RestartStopped (7.85s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-681675
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-681675: (6.852335789s)
--- PASS: TestMountStart/serial/RestartStopped (7.85s)

TestMountStart/serial/VerifyMountPostStop (0.31s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-681675 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.31s)

TestMultiNode/serial/FreshStart2Nodes (126.83s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-279658 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0115 11:13:51.215164 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/client.crt: no such file or directory
E0115 11:14:03.993165 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/client.crt: no such file or directory
E0115 11:14:31.675205 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-arm64 start -p multinode-279658 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (2m6.244582901s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279658 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (126.83s)

TestMultiNode/serial/DeployApp2Nodes (5.12s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-279658 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-279658 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-279658 -- rollout status deployment/busybox: (2.928963853s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-279658 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-279658 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-279658 -- exec busybox-5bc68d56bd-drm6d -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-279658 -- exec busybox-5bc68d56bd-nn8t2 -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-279658 -- exec busybox-5bc68d56bd-drm6d -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-279658 -- exec busybox-5bc68d56bd-nn8t2 -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-279658 -- exec busybox-5bc68d56bd-drm6d -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-279658 -- exec busybox-5bc68d56bd-nn8t2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.12s)

TestMultiNode/serial/AddNode (48.88s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-279658 -v 3 --alsologtostderr
E0115 11:15:14.260837 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/client.crt: no such file or directory
multinode_test.go:111: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-279658 -v 3 --alsologtostderr: (48.163757474s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279658 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (48.88s)

TestMultiNode/serial/MultiNodeLabels (0.1s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-279658 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

TestMultiNode/serial/ProfileList (0.35s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.35s)

TestMultiNode/serial/CopyFile (11.25s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279658 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279658 cp testdata/cp-test.txt multinode-279658:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279658 ssh -n multinode-279658 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279658 cp multinode-279658:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3324863094/001/cp-test_multinode-279658.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279658 ssh -n multinode-279658 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279658 cp multinode-279658:/home/docker/cp-test.txt multinode-279658-m02:/home/docker/cp-test_multinode-279658_multinode-279658-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279658 ssh -n multinode-279658 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279658 ssh -n multinode-279658-m02 "sudo cat /home/docker/cp-test_multinode-279658_multinode-279658-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279658 cp multinode-279658:/home/docker/cp-test.txt multinode-279658-m03:/home/docker/cp-test_multinode-279658_multinode-279658-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279658 ssh -n multinode-279658 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279658 ssh -n multinode-279658-m03 "sudo cat /home/docker/cp-test_multinode-279658_multinode-279658-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279658 cp testdata/cp-test.txt multinode-279658-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279658 ssh -n multinode-279658-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279658 cp multinode-279658-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3324863094/001/cp-test_multinode-279658-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279658 ssh -n multinode-279658-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279658 cp multinode-279658-m02:/home/docker/cp-test.txt multinode-279658:/home/docker/cp-test_multinode-279658-m02_multinode-279658.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279658 ssh -n multinode-279658-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279658 ssh -n multinode-279658 "sudo cat /home/docker/cp-test_multinode-279658-m02_multinode-279658.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279658 cp multinode-279658-m02:/home/docker/cp-test.txt multinode-279658-m03:/home/docker/cp-test_multinode-279658-m02_multinode-279658-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279658 ssh -n multinode-279658-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279658 ssh -n multinode-279658-m03 "sudo cat /home/docker/cp-test_multinode-279658-m02_multinode-279658-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279658 cp testdata/cp-test.txt multinode-279658-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279658 ssh -n multinode-279658-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279658 cp multinode-279658-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3324863094/001/cp-test_multinode-279658-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279658 ssh -n multinode-279658-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279658 cp multinode-279658-m03:/home/docker/cp-test.txt multinode-279658:/home/docker/cp-test_multinode-279658-m03_multinode-279658.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279658 ssh -n multinode-279658-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279658 ssh -n multinode-279658 "sudo cat /home/docker/cp-test_multinode-279658-m03_multinode-279658.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279658 cp multinode-279658-m03:/home/docker/cp-test.txt multinode-279658-m02:/home/docker/cp-test_multinode-279658-m03_multinode-279658-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279658 ssh -n multinode-279658-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279658 ssh -n multinode-279658-m02 "sudo cat /home/docker/cp-test_multinode-279658-m03_multinode-279658-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.25s)

TestMultiNode/serial/StopNode (2.39s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279658 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-arm64 -p multinode-279658 node stop m03: (1.231507556s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279658 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-279658 status: exit status 7 (586.422904ms)

-- stdout --
	multinode-279658
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-279658-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-279658-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279658 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-279658 status --alsologtostderr: exit status 7 (576.279964ms)

-- stdout --
	multinode-279658
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-279658-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-279658-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0115 11:16:15.971098 1703529 out.go:296] Setting OutFile to fd 1 ...
	I0115 11:16:15.971260 1703529 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 11:16:15.971271 1703529 out.go:309] Setting ErrFile to fd 2...
	I0115 11:16:15.971295 1703529 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 11:16:15.971572 1703529 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-1625104/.minikube/bin
	I0115 11:16:15.971810 1703529 out.go:303] Setting JSON to false
	I0115 11:16:15.971884 1703529 mustload.go:65] Loading cluster: multinode-279658
	I0115 11:16:15.971968 1703529 notify.go:220] Checking for updates...
	I0115 11:16:15.972336 1703529 config.go:182] Loaded profile config "multinode-279658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 11:16:15.972355 1703529 status.go:255] checking status of multinode-279658 ...
	I0115 11:16:15.973308 1703529 cli_runner.go:164] Run: docker container inspect multinode-279658 --format={{.State.Status}}
	I0115 11:16:15.992019 1703529 status.go:330] multinode-279658 host status = "Running" (err=<nil>)
	I0115 11:16:15.992054 1703529 host.go:66] Checking if "multinode-279658" exists ...
	I0115 11:16:15.992336 1703529 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-279658
	I0115 11:16:16.011847 1703529 host.go:66] Checking if "multinode-279658" exists ...
	I0115 11:16:16.012144 1703529 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 11:16:16.012186 1703529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-279658
	I0115 11:16:16.045842 1703529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34794 SSHKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/multinode-279658/id_rsa Username:docker}
	I0115 11:16:16.144978 1703529 ssh_runner.go:195] Run: systemctl --version
	I0115 11:16:16.150766 1703529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 11:16:16.164298 1703529 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 11:16:16.239078 1703529 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:55 SystemTime:2024-01-15 11:16:16.228071886 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0115 11:16:16.239729 1703529 kubeconfig.go:92] found "multinode-279658" server: "https://192.168.58.2:8443"
	I0115 11:16:16.239752 1703529 api_server.go:166] Checking apiserver status ...
	I0115 11:16:16.239794 1703529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 11:16:16.252830 1703529 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1261/cgroup
	I0115 11:16:16.264601 1703529 api_server.go:182] apiserver freezer: "4:freezer:/docker/a18a2b3c9b565e6af2c30d7338137b4960649a9ec9dbde78f7aef931d0441cd5/crio/crio-84d066fd93439a19486fb0ebc2853a2b491faefc8795ce2c300608f039e84a0b"
	I0115 11:16:16.264674 1703529 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a18a2b3c9b565e6af2c30d7338137b4960649a9ec9dbde78f7aef931d0441cd5/crio/crio-84d066fd93439a19486fb0ebc2853a2b491faefc8795ce2c300608f039e84a0b/freezer.state
	I0115 11:16:16.275121 1703529 api_server.go:204] freezer state: "THAWED"
	I0115 11:16:16.275149 1703529 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0115 11:16:16.284361 1703529 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0115 11:16:16.284389 1703529 status.go:421] multinode-279658 apiserver status = Running (err=<nil>)
	I0115 11:16:16.284403 1703529 status.go:257] multinode-279658 status: &{Name:multinode-279658 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0115 11:16:16.284435 1703529 status.go:255] checking status of multinode-279658-m02 ...
	I0115 11:16:16.284752 1703529 cli_runner.go:164] Run: docker container inspect multinode-279658-m02 --format={{.State.Status}}
	I0115 11:16:16.305127 1703529 status.go:330] multinode-279658-m02 host status = "Running" (err=<nil>)
	I0115 11:16:16.305151 1703529 host.go:66] Checking if "multinode-279658-m02" exists ...
	I0115 11:16:16.305471 1703529 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-279658-m02
	I0115 11:16:16.329773 1703529 host.go:66] Checking if "multinode-279658-m02" exists ...
	I0115 11:16:16.330079 1703529 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 11:16:16.330166 1703529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-279658-m02
	I0115 11:16:16.347666 1703529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34799 SSHKeyPath:/home/jenkins/minikube-integration/17953-1625104/.minikube/machines/multinode-279658-m02/id_rsa Username:docker}
	I0115 11:16:16.444518 1703529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 11:16:16.457887 1703529 status.go:257] multinode-279658-m02 status: &{Name:multinode-279658-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0115 11:16:16.457922 1703529 status.go:255] checking status of multinode-279658-m03 ...
	I0115 11:16:16.458235 1703529 cli_runner.go:164] Run: docker container inspect multinode-279658-m03 --format={{.State.Status}}
	I0115 11:16:16.480450 1703529 status.go:330] multinode-279658-m03 host status = "Stopped" (err=<nil>)
	I0115 11:16:16.480474 1703529 status.go:343] host is not running, skipping remaining checks
	I0115 11:16:16.480488 1703529 status.go:257] multinode-279658-m03 status: &{Name:multinode-279658-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.39s)
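
The stderr above shows how status is derived for a running control plane: find the kube-apiserver PID, resolve its freezer cgroup, confirm the cgroup is THAWED, then probe /healthz. A rough Go sketch of the first three steps (assuming a cgroup v1 freezer hierarchy and passwordless sudo, as in this run):

	package main

	import (
		"fmt"
		"os/exec"
		"regexp"
		"strings"
	)

	func freezerState(pid string) (string, error) {
		cg, err := exec.Command("sudo", "cat", "/proc/"+pid+"/cgroup").Output()
		if err != nil {
			return "", err
		}
		// Match the "<n>:freezer:<path>" line, as the egrep in the log does.
		m := regexp.MustCompile(`(?m)^\d+:freezer:(.+)$`).FindStringSubmatch(string(cg))
		if m == nil {
			return "", fmt.Errorf("no freezer cgroup for pid %s", pid)
		}
		state, err := exec.Command("sudo", "cat",
			"/sys/fs/cgroup/freezer"+m[1]+"/freezer.state").Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(state)), nil // "THAWED" in the run above
	}

	func main() {
		// Same pattern the log shows being passed to pgrep.
		pid, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err != nil {
			panic(err)
		}
		state, err := freezerState(strings.TrimSpace(string(pid)))
		if err != nil {
			panic(err)
		}
		fmt.Println("apiserver freezer state:", state)
	}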

TestMultiNode/serial/StartAfterStop (13.26s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279658 node start m03 --alsologtostderr
E0115 11:16:16.696381 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/functional-641147/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-279658 node start m03 --alsologtostderr: (12.377147699s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279658 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (13.26s)

TestMultiNode/serial/RestartKeepsNodes (121.25s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-279658
multinode_test.go:318: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-279658
multinode_test.go:318: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-279658: (24.874456631s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-279658 --wait=true -v=8 --alsologtostderr
multinode_test.go:323: (dbg) Done: out/minikube-linux-arm64 start -p multinode-279658 --wait=true -v=8 --alsologtostderr: (1m36.212185048s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-279658
--- PASS: TestMultiNode/serial/RestartKeepsNodes (121.25s)

TestMultiNode/serial/DeleteNode (5.25s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279658 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p multinode-279658 node delete m03: (4.439767563s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279658 status --alsologtostderr
multinode_test.go:442: (dbg) Run:  docker volume ls
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.25s)

TestMultiNode/serial/StopMultiNode (24.11s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279658 stop
E0115 11:18:51.215015 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/client.crt: no such file or directory
multinode_test.go:342: (dbg) Done: out/minikube-linux-arm64 -p multinode-279658 stop: (23.818180041s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279658 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-279658 status: exit status 7 (168.587008ms)

-- stdout --
	multinode-279658
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-279658-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279658 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-279658 status --alsologtostderr: exit status 7 (122.819755ms)

-- stdout --
	multinode-279658
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-279658-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0115 11:19:00.302721 1711689 out.go:296] Setting OutFile to fd 1 ...
	I0115 11:19:00.302981 1711689 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 11:19:00.303008 1711689 out.go:309] Setting ErrFile to fd 2...
	I0115 11:19:00.303030 1711689 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 11:19:00.303383 1711689 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-1625104/.minikube/bin
	I0115 11:19:00.303665 1711689 out.go:303] Setting JSON to false
	I0115 11:19:00.303782 1711689 mustload.go:65] Loading cluster: multinode-279658
	I0115 11:19:00.303909 1711689 notify.go:220] Checking for updates...
	I0115 11:19:00.304285 1711689 config.go:182] Loaded profile config "multinode-279658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0115 11:19:00.304298 1711689 status.go:255] checking status of multinode-279658 ...
	I0115 11:19:00.304994 1711689 cli_runner.go:164] Run: docker container inspect multinode-279658 --format={{.State.Status}}
	I0115 11:19:00.326905 1711689 status.go:330] multinode-279658 host status = "Stopped" (err=<nil>)
	I0115 11:19:00.326927 1711689 status.go:343] host is not running, skipping remaining checks
	I0115 11:19:00.326935 1711689 status.go:257] multinode-279658 status: &{Name:multinode-279658 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0115 11:19:00.326968 1711689 status.go:255] checking status of multinode-279658-m02 ...
	I0115 11:19:00.327279 1711689 cli_runner.go:164] Run: docker container inspect multinode-279658-m02 --format={{.State.Status}}
	I0115 11:19:00.346697 1711689 status.go:330] multinode-279658-m02 host status = "Stopped" (err=<nil>)
	I0115 11:19:00.346724 1711689 status.go:343] host is not running, skipping remaining checks
	I0115 11:19:00.346734 1711689 status.go:257] multinode-279658-m02 status: &{Name:multinode-279658-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.11s)

TestMultiNode/serial/RestartMultiNode (80.01s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-279658 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0115 11:19:03.993784 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-linux-arm64 start -p multinode-279658 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m19.242593897s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p multinode-279658 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (80.01s)

TestMultiNode/serial/ValidateNameConflict (35.75s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-279658
multinode_test.go:480: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-279658-m02 --driver=docker  --container-runtime=crio
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-279658-m02 --driver=docker  --container-runtime=crio: exit status 14 (105.784222ms)

-- stdout --
	* [multinode-279658-m02] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17953
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17953-1625104/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-1625104/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-279658-m02' is duplicated with machine name 'multinode-279658-m02' in profile 'multinode-279658'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-279658-m03 --driver=docker  --container-runtime=crio
multinode_test.go:488: (dbg) Done: out/minikube-linux-arm64 start -p multinode-279658-m03 --driver=docker  --container-runtime=crio: (33.180124728s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-279658
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-279658: exit status 80 (389.240333ms)

-- stdout --
	* Adding node m03 to cluster multinode-279658
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-279658-m03 already exists in multinode-279658-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-279658-m03
multinode_test.go:500: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-279658-m03: (2.007631823s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.75s)

TestPreload (175.07s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-259222 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0115 11:21:16.697315 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/functional-641147/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-259222 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m29.36216199s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-259222 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-259222 image pull gcr.io/k8s-minikube/busybox: (2.085866174s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-259222
E0115 11:22:39.739457 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/functional-641147/client.crt: no such file or directory
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-259222: (5.861113443s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-259222 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0115 11:23:51.215261 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-259222 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m15.08373099s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-259222 image list
helpers_test.go:175: Cleaning up "test-preload-259222" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-259222
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-259222: (2.410600124s)
--- PASS: TestPreload (175.07s)

TestScheduledStopUnix (108.53s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-019648 --memory=2048 --driver=docker  --container-runtime=crio
E0115 11:24:03.992690 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-019648 --memory=2048 --driver=docker  --container-runtime=crio: (32.052476099s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-019648 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-019648 -n scheduled-stop-019648
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-019648 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-019648 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-019648 -n scheduled-stop-019648
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-019648
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-019648 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0115 11:25:27.035497 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-019648
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-019648: exit status 7 (83.235966ms)

                                                
                                                
-- stdout --
	scheduled-stop-019648
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-019648 -n scheduled-stop-019648
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-019648 -n scheduled-stop-019648: exit status 7 (86.627708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-019648" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-019648
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-019648: (4.709546617s)
--- PASS: TestScheduledStopUnix (108.53s)
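
The scheduled-stop sequence above (schedule, cancel, re-schedule, wait for the stop to fire) follows the same shape; a minimal sketch with a hypothetical profile name, where exit status 7 from `status` is the expected "host stopped" result noted above:

	// A sketch of the TestScheduledStopUnix flow.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func mk(args ...string) (string, error) {
		out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
		return string(out), err
	}

	func main() {
		p := "scheduled-stop-sketch" // hypothetical profile name
		mk("start", "-p", p, "--memory=2048", "--driver=docker", "--container-runtime=crio")
		mk("stop", "-p", p, "--schedule", "5m")   // schedule a stop 5 minutes out
		mk("stop", "-p", p, "--cancel-scheduled") // ...and cancel it before it fires
		mk("stop", "-p", p, "--schedule", "15s")  // re-schedule with a short window
		time.Sleep(30 * time.Second)              // let the scheduled stop fire
		out, err := mk("status", "--format={{.Host}}", "-p", p)
		fmt.Printf("host=%q err=%v\n", strings.TrimSpace(out), err) // "Stopped", exit 7
		mk("delete", "-p", p)
	}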

                                                
                                    
TestInsufficientStorage (12.02s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-448936 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-448936 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (9.395118315s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"729143d8-029d-4b67-9ea0-c1a3eb444f6a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-448936] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5295140f-8f79-42d9-8c00-9a420f136aa1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17953"}}
	{"specversion":"1.0","id":"02f9db87-3191-4f94-83d5-ab3c6504b350","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"791c8e3d-c70b-4866-96c1-2009aa464ab2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17953-1625104/kubeconfig"}}
	{"specversion":"1.0","id":"fa09876c-5f27-4a34-9128-d38b37f9311d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-1625104/.minikube"}}
	{"specversion":"1.0","id":"d0f690c7-a6f4-44cf-a2e5-0f15d11636de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"00d5f0ac-9641-49bc-bd8b-65eade9aab29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3ff4c440-e3dd-4b5b-bbb1-f8342f38933a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"20dadf5d-253a-4232-9488-b578151989ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"aed68d0a-ed9a-4481-9fb2-68f80336de16","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"b2a6ab62-b278-4955-974c-9860652b1cc0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"174396d9-cea3-42ce-8932-4a8123811e5f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-448936 in cluster insufficient-storage-448936","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"54774a91-46f4-4f04-8066-3e24b307299e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1704759386-17866 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"7ce462f1-0bc4-49f3-bf7e-5963a2d75680","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"3ec4c361-24b7-4051-a3ae-2460f74389fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-448936 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-448936 --output=json --layout=cluster: exit status 7 (349.861976ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-448936","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-448936","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0115 11:25:56.058388 1728141 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-448936" does not appear in /home/jenkins/minikube-integration/17953-1625104/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-448936 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-448936 --output=json --layout=cluster: exit status 7 (330.150458ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-448936","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-448936","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0115 11:25:56.391186 1728196 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-448936" does not appear in /home/jenkins/minikube-integration/17953-1625104/kubeconfig
	E0115 11:25:56.403642 1728196 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/insufficient-storage-448936/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-448936" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-448936
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-448936: (1.945775154s)
--- PASS: TestInsufficientStorage (12.02s)
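
The `--layout=cluster` payloads above are plain JSON and straightforward to consume; a minimal decode sketch, where the struct mirrors only the fields visible in this run's output (not minikube's full schema):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// clusterStatus mirrors the fields visible in the status output above.
	type clusterStatus struct {
		Name         string
		StatusCode   int
		StatusName   string
		StatusDetail string
		Nodes        []struct {
			Name       string
			StatusCode int
			StatusName string
		}
	}

	func main() {
		raw := `{"Name":"insufficient-storage-448936","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Nodes":[{"Name":"insufficient-storage-448936","StatusCode":507,"StatusName":"InsufficientStorage"}]}`
		var st clusterStatus
		if err := json.Unmarshal([]byte(raw), &st); err != nil {
			panic(err)
		}
		// 507 maps to InsufficientStorage; the failed start itself exits 26
		// (RSRC_DOCKER_STORAGE), as shown in the CloudEvents stream above.
		fmt.Printf("%s: %d %s (%s)\n", st.Name, st.StatusCode, st.StatusName, st.StatusDetail)
	}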

                                                
                                    
TestRunningBinaryUpgrade (113.82s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3837763847 start -p running-upgrade-431290 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3837763847 start -p running-upgrade-431290 --memory=2200 --vm-driver=docker  --container-runtime=crio: (35.90386128s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-431290 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0115 11:31:16.696519 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/functional-641147/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-431290 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m13.967249692s)
helpers_test.go:175: Cleaning up "running-upgrade-431290" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-431290
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-431290: (2.670050381s)
--- PASS: TestRunningBinaryUpgrade (113.82s)

                                                
                                    
TestKubernetesUpgrade (405.54s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-413698 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-413698 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m3.515201819s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-413698
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-413698: (1.345220289s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-413698 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-413698 status --format={{.Host}}: exit status 7 (113.199736ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-413698 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-413698 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m47.060592749s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-413698 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-413698 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-413698 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (200.171161ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-413698] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17953
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17953-1625104/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-1625104/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-413698
	    minikube start -p kubernetes-upgrade-413698 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4136982 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-413698 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-413698 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0115 11:34:03.992627 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/client.crt: no such file or directory
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-413698 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (50.899619553s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-413698" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-413698
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-413698: (2.277791048s)
--- PASS: TestKubernetesUpgrade (405.54s)
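
The upgrade path exercised above (v1.16.0, stop, v1.29.0-rc.2, then a refused downgrade) in sketch form; exit code 106 is the K8S_DOWNGRADE_UNSUPPORTED guard shown in the stderr block, and the profile name here is a hypothetical stand-in:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func mk(args ...string) error {
		return exec.Command("out/minikube-linux-arm64", args...).Run()
	}

	func main() {
		p := "kubernetes-upgrade-sketch" // hypothetical profile name
		mk("start", "-p", p, "--memory=2200", "--kubernetes-version=v1.16.0",
			"--driver=docker", "--container-runtime=crio")
		mk("stop", "-p", p)
		mk("start", "-p", p, "--memory=2200", "--kubernetes-version=v1.29.0-rc.2",
			"--driver=docker", "--container-runtime=crio")
		// The downgrade must be refused without touching the cluster.
		err := mk("start", "-p", p, "--memory=2200", "--kubernetes-version=v1.16.0",
			"--driver=docker", "--container-runtime=crio")
		if ee, ok := err.(*exec.ExitError); ok {
			fmt.Println("downgrade exit code:", ee.ExitCode()) // 106 in the run above
		}
		mk("delete", "-p", p)
	}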

                                                
                                    
TestMissingContainerUpgrade (149.44s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.2700048516 start -p missing-upgrade-299643 --memory=2200 --driver=docker  --container-runtime=crio
E0115 11:26:16.696451 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/functional-641147/client.crt: no such file or directory
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.2700048516 start -p missing-upgrade-299643 --memory=2200 --driver=docker  --container-runtime=crio: (1m12.561384512s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-299643
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-299643: (10.440163112s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-299643
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-299643 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-299643 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m0.556464577s)
helpers_test.go:175: Cleaning up "missing-upgrade-299643" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-299643
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-299643: (4.710288255s)
--- PASS: TestMissingContainerUpgrade (149.44s)
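
TestMissingContainerUpgrade covers the case where the node container vanishes between binary versions (the docker driver names that container after the profile, which is why `docker stop`/`docker rm` above target it directly). A sketch of the same sequence; the old-binary path is an assumption, standing in for the /tmp copy of v1.26.0 used in this run:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		p := "missing-upgrade-sketch"  // hypothetical profile name
		old := "/tmp/minikube-v1.26.0" // assumed path to an old release binary
		exec.Command(old, "start", "-p", p, "--memory=2200",
			"--driver=docker", "--container-runtime=crio").Run()
		// Remove the node container behind minikube's back.
		exec.Command("docker", "stop", p).Run()
		exec.Command("docker", "rm", p).Run()
		// The current binary must notice the missing container and recreate it.
		exec.Command("out/minikube-linux-arm64", "start", "-p", p, "--memory=2200",
			"--driver=docker", "--container-runtime=crio").Run()
		out, _ := exec.Command("out/minikube-linux-arm64", "status", "-p", p).CombinedOutput()
		fmt.Print(string(out))
		exec.Command("out/minikube-linux-arm64", "delete", "-p", p).Run()
	}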

                                                
                                    
TestPause/serial/Start (82.84s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-636923 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-636923 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m22.844240181s)
--- PASS: TestPause/serial/Start (82.84s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (31.67s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-636923 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-636923 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (31.621498869s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (31.67s)

                                                
                                    
TestPause/serial/Pause (0.99s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-636923 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.99s)

                                                
                                    
TestPause/serial/VerifyStatus (0.44s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-636923 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-636923 --output=json --layout=cluster: exit status 2 (440.970258ms)

                                                
                                                
-- stdout --
	{"Name":"pause-636923","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-636923","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.44s)

                                                
                                    
TestPause/serial/Unpause (0.97s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-636923 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.97s)

                                                
                                    
TestPause/serial/PauseAgain (1.58s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-636923 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-636923 --alsologtostderr -v=5: (1.575769468s)
--- PASS: TestPause/serial/PauseAgain (1.58s)

                                                
                                    
TestPause/serial/DeletePaused (6.95s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-636923 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-636923 --alsologtostderr -v=5: (6.949488686s)
--- PASS: TestPause/serial/DeletePaused (6.95s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.5s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-636923
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-636923: exit status 1 (18.249475ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-636923: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.50s)
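
VerifyDeletedResources boils down to two Docker-side checks after `minikube delete`; a sketch using the profile name from this run (already deleted at this point):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		p := "pause-636923"
		// After delete, inspecting the profile volume prints "[]" and exits 1,
		// exactly as in the log above.
		err := exec.Command("docker", "volume", "inspect", p).Run()
		fmt.Println("volume still exists:", err == nil)
		// The profile network must be gone from `docker network ls` too.
		out, _ := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
		fmt.Println("network still exists:", strings.Contains(string(out), p))
	}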

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.14s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.14s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (80.84s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3170723150 start -p stopped-upgrade-531873 --memory=2200 --vm-driver=docker  --container-runtime=crio
E0115 11:28:51.215260 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/client.crt: no such file or directory
E0115 11:29:03.992783 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3170723150 start -p stopped-upgrade-531873 --memory=2200 --vm-driver=docker  --container-runtime=crio: (43.482622877s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3170723150 -p stopped-upgrade-531873 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3170723150 -p stopped-upgrade-531873 stop: (2.894305088s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-531873 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-531873 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (34.465957114s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (80.84s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.1s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-531873
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-531873: (1.097630613s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.10s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-740413 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-740413 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (97.493832ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-740413] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17953
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17953-1625104/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-1625104/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
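
The exit status 14 above is minikube's MK_USAGE guard for mutually exclusive flags; checking for it programmatically is short, sketched here with the profile name from this run:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// --no-kubernetes and --kubernetes-version conflict, so minikube
		// exits 14 (MK_USAGE) before touching the driver.
		err := exec.Command("out/minikube-linux-arm64", "start", "-p", "NoKubernetes-740413",
			"--no-kubernetes", "--kubernetes-version=1.20",
			"--driver=docker", "--container-runtime=crio").Run()
		if ee, ok := err.(*exec.ExitError); ok {
			fmt.Println("exit code:", ee.ExitCode()) // 14 in the run above
		}
	}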

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (35.47s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-740413 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-740413 --driver=docker  --container-runtime=crio: (35.093689677s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-740413 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (35.47s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (12.58s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-740413 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-740413 --no-kubernetes --driver=docker  --container-runtime=crio: (10.156208173s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-740413 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-740413 status -o json: exit status 2 (349.124611ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-740413","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-740413
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-740413: (2.071930725s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (12.58s)

                                                
                                    
TestNoKubernetes/serial/Start (6.15s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-740413 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-740413 --no-kubernetes --driver=docker  --container-runtime=crio: (6.151622725s)
--- PASS: TestNoKubernetes/serial/Start (6.15s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-740413 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-740413 "sudo systemctl is-active --quiet service kubelet": exit status 1 (402.826931ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.40s)
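
The non-zero exit asserted above comes from systemctl itself: `is-active` exits 0 for an active unit and non-zero (typically 3 for inactive) otherwise, and ssh propagates that status. A sketch of the same check:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// An inactive kubelet yields "Process exited with status 3", so a
		// non-nil error here means Kubernetes components are not running.
		err := exec.Command("out/minikube-linux-arm64", "ssh", "-p", "NoKubernetes-740413",
			"sudo systemctl is-active --quiet service kubelet").Run()
		fmt.Println("kubelet active:", err == nil)
	}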

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.11s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-740413
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-740413: (1.225953449s)
--- PASS: TestNoKubernetes/serial/Stop (1.23s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.9s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-740413 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-740413 --driver=docker  --container-runtime=crio: (6.903989359s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.90s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-740413 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-740413 "sudo systemctl is-active --quiet service kubelet": exit status 1 (301.153742ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.30s)

                                                
                                    
TestNetworkPlugins/group/false (5.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-165703 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-165703 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (274.833003ms)

                                                
                                                
-- stdout --
	* [false-165703] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17953
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17953-1625104/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-1625104/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0115 11:33:37.771627 1764257 out.go:296] Setting OutFile to fd 1 ...
	I0115 11:33:37.771870 1764257 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 11:33:37.771892 1764257 out.go:309] Setting ErrFile to fd 2...
	I0115 11:33:37.771910 1764257 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 11:33:37.772168 1764257 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17953-1625104/.minikube/bin
	I0115 11:33:37.772623 1764257 out.go:303] Setting JSON to false
	I0115 11:33:37.773553 1764257 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":36960,"bootTime":1705281458,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0115 11:33:37.773647 1764257 start.go:138] virtualization:  
	I0115 11:33:37.777868 1764257 out.go:177] * [false-165703] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0115 11:33:37.779748 1764257 out.go:177]   - MINIKUBE_LOCATION=17953
	I0115 11:33:37.779840 1764257 notify.go:220] Checking for updates...
	I0115 11:33:37.782568 1764257 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 11:33:37.784465 1764257 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17953-1625104/kubeconfig
	I0115 11:33:37.786503 1764257 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17953-1625104/.minikube
	I0115 11:33:37.788216 1764257 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0115 11:33:37.792623 1764257 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0115 11:33:37.795288 1764257 config.go:182] Loaded profile config "kubernetes-upgrade-413698": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0115 11:33:37.795441 1764257 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 11:33:37.836849 1764257 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0115 11:33:37.836974 1764257 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 11:33:37.944283 1764257 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2024-01-15 11:33:37.933937519 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0115 11:33:37.944386 1764257 docker.go:295] overlay module found
	I0115 11:33:37.946511 1764257 out.go:177] * Using the docker driver based on user configuration
	I0115 11:33:37.948692 1764257 start.go:298] selected driver: docker
	I0115 11:33:37.948710 1764257 start.go:902] validating driver "docker" against <nil>
	I0115 11:33:37.948724 1764257 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0115 11:33:37.951191 1764257 out.go:177] 
	W0115 11:33:37.953190 1764257 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0115 11:33:37.955580 1764257 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-165703 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-165703

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-165703

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-165703

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-165703

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-165703

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-165703

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-165703

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-165703

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-165703

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-165703

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165703"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165703"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165703"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-165703

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165703"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165703"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-165703" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-165703" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-165703" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-165703" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-165703" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-165703" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-165703" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-165703" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165703"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165703"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165703"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165703"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165703"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-165703" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-165703" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-165703" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165703"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165703"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165703"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165703"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165703"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17953-1625104/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 15 Jan 2024 11:29:23 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: kubernetes-upgrade-413698
contexts:
- context:
    cluster: kubernetes-upgrade-413698
    user: kubernetes-upgrade-413698
  name: kubernetes-upgrade-413698
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-413698
  user:
    client-certificate: /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/kubernetes-upgrade-413698/client.crt
    client-key: /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/kubernetes-upgrade-413698/client.key
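
The dump above is the kubeconfig as the suite left it: current-context is empty and only a stale kubernetes-upgrade-413698 entry remains, which is why every probe in this debugLogs run reports that context "false-165703" does not exist. A minimal way to confirm that from the same file (a sketch, assuming it is run on the CI host against this kubeconfig):

# current-context is "", so this exits non-zero
kubectl config current-context || echo "no context selected"
# lists only the leftover kubernetes-upgrade-413698 entry
kubectl config get-contexts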

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-165703

>>> host: docker daemon status:
* Profile "false-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165703"

>>> host: docker daemon config:
* Profile "false-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165703"

>>> host: /etc/docker/daemon.json:
* Profile "false-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165703"

>>> host: docker system info:
* Profile "false-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165703"

>>> host: cri-docker daemon status:
* Profile "false-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165703"

>>> host: cri-docker daemon config:
* Profile "false-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165703"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165703"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165703"

>>> host: cri-dockerd version:
* Profile "false-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165703"

>>> host: containerd daemon status:
* Profile "false-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165703"

>>> host: containerd daemon config:
* Profile "false-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165703"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165703"

>>> host: /etc/containerd/config.toml:
* Profile "false-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165703"

>>> host: containerd config dump:
* Profile "false-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165703"

>>> host: crio daemon status:
* Profile "false-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165703"

>>> host: crio daemon config:
* Profile "false-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165703"

>>> host: /etc/crio:
* Profile "false-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165703"

>>> host: crio config:
* Profile "false-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165703"

----------------------- debugLogs end: false-165703 [took: 4.889325731s] --------------------------------
helpers_test.go:175: Cleaning up "false-165703" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-165703
--- PASS: TestNetworkPlugins/group/false (5.36s)
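
Every probe in the debugLogs dump above failed the same way because the false-165703 profile had already been removed when collection ran. A pre-check before gathering host and k8s state could look like this (a sketch, assuming the same out/minikube-linux-arm64 binary and working directory):

# grep exits non-zero when the profile is absent from the table
out/minikube-linux-arm64 profile list | grep false-165703 \
  || echo "profile gone; host/k8s probes will all fail"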

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (127.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-064981 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E0115 11:36:16.696757 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/functional-641147/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-064981 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (2m7.250510684s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (127.25s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (67.88s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-465205 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-465205 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (1m7.883974781s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (67.88s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (9.66s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-064981 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2bbcfcdb-4cb0-4500-b08f-8dc47be8999a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2bbcfcdb-4cb0-4500-b08f-8dc47be8999a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003486384s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-064981 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.66s)
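
The DeployApp step is three kubectl calls under the hood; the equivalent manual run looks like this (a sketch, assuming the pod in testdata/busybox.yaml carries the integration-test=busybox label used by the test's selector):

kubectl --context old-k8s-version-064981 create -f testdata/busybox.yaml
# wait on readiness instead of polling the phase transitions logged above
kubectl --context old-k8s-version-064981 wait --for=condition=ready \
  pod -l integration-test=busybox --timeout=8m0s
# the assertion itself: the container's open-file limit
kubectl --context old-k8s-version-064981 exec busybox -- /bin/sh -c "ulimit -n"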

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-064981 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-064981 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.869425032s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-064981 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.03s)
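
The --images/--registries overrides redirect where the metrics-server addon pulls from; one way to see the rewrite land (a sketch; the jsonpath assumes the deployment's first container holds the addon image, and the expected value is inferred from the flags above):

kubectl --context old-k8s-version-064981 -n kube-system get deploy metrics-server \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
# expected to print fake.domain/registry.k8s.io/echoserver:1.4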

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (12.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-064981 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-064981 --alsologtostderr -v=3: (12.179611106s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.18s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-064981 -n old-k8s-version-064981
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-064981 -n old-k8s-version-064981: exit status 7 (119.016346ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-064981 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.27s)
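
The "exit status 7 (may be ok)" note reflects minikube status encoding component state in its exit code, so a stopped profile is tolerated rather than failed. A shell sketch of the same tolerance (treating 7 as "stopped" is an assumption drawn from the note above):

out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-064981; rc=$?
# 0 = running; 7 = profile fully stopped, which this test accepts
[ "$rc" -eq 0 ] || [ "$rc" -eq 7 ] || echo "unexpected status exit code: $rc"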

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (447.33s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-064981 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E0115 11:38:51.214710 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-064981 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (7m26.899515479s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-064981 -n old-k8s-version-064981
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (447.33s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.46s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-465205 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [35dc71ce-042d-415d-b9ab-f94e0fcc86ef] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [35dc71ce-042d-415d-b9ab-f94e0fcc86ef] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003978965s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-465205 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.46s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.78s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-465205 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-465205 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.653668521s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-465205 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.78s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (12.35s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-465205 --alsologtostderr -v=3
E0115 11:39:03.992549 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-465205 --alsologtostderr -v=3: (12.345884775s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.35s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-465205 -n no-preload-465205
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-465205 -n no-preload-465205: exit status 7 (107.042473ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-465205 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (366.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-465205 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0115 11:39:19.740616 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/functional-641147/client.crt: no such file or directory
E0115 11:41:16.697102 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/functional-641147/client.crt: no such file or directory
E0115 11:42:07.036466 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/client.crt: no such file or directory
E0115 11:43:51.215293 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/client.crt: no such file or directory
E0115 11:44:03.993369 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-465205 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (6m5.556425822s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-465205 -n no-preload-465205
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (366.07s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-r7nhs" [20a5042d-98fa-4373-91de-a81e3be43db7] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.006156923s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
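
The dashboard check is a plain label-selector wait; reproducing it by hand takes one command (a sketch, using the namespace and selector shown in the log):

kubectl --context no-preload-465205 -n kubernetes-dashboard wait \
  --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m0s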

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-r7nhs" [20a5042d-98fa-4373-91de-a81e3be43db7] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003858134s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-465205 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-465205 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.30s)
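
The image audit lists everything in the runtime and flags images outside the expected minikube set. A sketch of pulling just the tags from the same JSON (this assumes the entries expose a repoTags array and that jq is installed; verify the schema with the raw command first):

out/minikube-linux-arm64 -p no-preload-465205 image list --format=json \
  | jq -r '.[].repoTags[]' | sort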

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (3.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-465205 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-465205 -n no-preload-465205
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-465205 -n no-preload-465205: exit status 2 (380.973266ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-465205 -n no-preload-465205
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-465205 -n no-preload-465205: exit status 2 (363.225187ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-465205 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-465205 -n no-preload-465205
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-465205 -n no-preload-465205
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.42s)
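
The pause cycle leans on the same exit-code convention: while components are frozen, status exits 2 with Paused/Stopped output, and the test treats that as expected. By hand (a sketch; the "exits 2 / exits 0" readings follow the "may be ok" notes above):

out/minikube-linux-arm64 pause -p no-preload-465205
out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-465205   # Paused, exits 2
out/minikube-linux-arm64 unpause -p no-preload-465205
out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-465205   # should exit 0 again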

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (84.76s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-848307 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-848307 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (1m24.763711537s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (84.76s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-lxg67" [04240bee-ac28-41b3-8b41-16d3cb236d34] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004616694s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-lxg67" [04240bee-ac28-41b3-8b41-16d3cb236d34] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00367227s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-064981 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.17s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-064981 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220726-ed811e41
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.37s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-064981 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-064981 --alsologtostderr -v=1: (1.36413096s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-064981 -n old-k8s-version-064981
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-064981 -n old-k8s-version-064981: exit status 2 (608.205837ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-064981 -n old-k8s-version-064981
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-064981 -n old-k8s-version-064981: exit status 2 (556.148187ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-064981 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p old-k8s-version-064981 --alsologtostderr -v=1: (1.285664496s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-064981 -n old-k8s-version-064981
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-064981 -n old-k8s-version-064981
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (5.10s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (80.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-812907 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-812907 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (1m20.273601137s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (80.27s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.41s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-848307 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [16361485-fd93-4ce2-aa24-b1b782fa2240] Pending
helpers_test.go:344: "busybox" [16361485-fd93-4ce2-aa24-b1b782fa2240] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [16361485-fd93-4ce2-aa24-b1b782fa2240] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.00429749s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-848307 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.41s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-848307 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-848307 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.087950246s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-848307 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.22s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (12.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-848307 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-848307 --alsologtostderr -v=3: (12.059553916s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.06s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-848307 -n embed-certs-848307
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-848307 -n embed-certs-848307: exit status 7 (100.045905ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-848307 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (628.72s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-848307 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-848307 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (10m28.256983502s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-848307 -n embed-certs-848307
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (628.72s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-812907 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [88870234-74f6-47e1-99da-4251244aec82] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [88870234-74f6-47e1-99da-4251244aec82] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003361839s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-812907 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.42s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.54s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-812907 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-812907 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.42236799s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-812907 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.54s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.33s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-812907 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-812907 --alsologtostderr -v=3: (12.330239168s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.33s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-812907 -n default-k8s-diff-port-812907
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-812907 -n default-k8s-diff-port-812907: exit status 7 (90.670233ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-812907 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (353.46s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-812907 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E0115 11:48:04.853107 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/old-k8s-version-064981/client.crt: no such file or directory
E0115 11:48:04.858373 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/old-k8s-version-064981/client.crt: no such file or directory
E0115 11:48:04.868682 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/old-k8s-version-064981/client.crt: no such file or directory
E0115 11:48:04.889011 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/old-k8s-version-064981/client.crt: no such file or directory
E0115 11:48:04.929311 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/old-k8s-version-064981/client.crt: no such file or directory
E0115 11:48:05.009708 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/old-k8s-version-064981/client.crt: no such file or directory
E0115 11:48:05.170104 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/old-k8s-version-064981/client.crt: no such file or directory
E0115 11:48:05.490843 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/old-k8s-version-064981/client.crt: no such file or directory
E0115 11:48:06.131417 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/old-k8s-version-064981/client.crt: no such file or directory
E0115 11:48:07.412038 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/old-k8s-version-064981/client.crt: no such file or directory
E0115 11:48:09.972262 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/old-k8s-version-064981/client.crt: no such file or directory
E0115 11:48:15.093027 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/old-k8s-version-064981/client.crt: no such file or directory
E0115 11:48:25.333934 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/old-k8s-version-064981/client.crt: no such file or directory
E0115 11:48:34.261871 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/client.crt: no such file or directory
E0115 11:48:45.814356 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/old-k8s-version-064981/client.crt: no such file or directory
E0115 11:48:51.214607 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/client.crt: no such file or directory
E0115 11:48:51.857557 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/no-preload-465205/client.crt: no such file or directory
E0115 11:48:51.862790 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/no-preload-465205/client.crt: no such file or directory
E0115 11:48:51.873072 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/no-preload-465205/client.crt: no such file or directory
E0115 11:48:51.893373 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/no-preload-465205/client.crt: no such file or directory
E0115 11:48:51.933670 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/no-preload-465205/client.crt: no such file or directory
E0115 11:48:52.014009 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/no-preload-465205/client.crt: no such file or directory
E0115 11:48:52.174432 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/no-preload-465205/client.crt: no such file or directory
E0115 11:48:52.494933 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/no-preload-465205/client.crt: no such file or directory
E0115 11:48:53.135414 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/no-preload-465205/client.crt: no such file or directory
E0115 11:48:54.416277 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/no-preload-465205/client.crt: no such file or directory
E0115 11:48:56.977384 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/no-preload-465205/client.crt: no such file or directory
E0115 11:49:02.098313 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/no-preload-465205/client.crt: no such file or directory
E0115 11:49:03.992720 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/client.crt: no such file or directory
E0115 11:49:12.339232 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/no-preload-465205/client.crt: no such file or directory
E0115 11:49:26.775405 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/old-k8s-version-064981/client.crt: no such file or directory
E0115 11:49:32.819829 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/no-preload-465205/client.crt: no such file or directory
E0115 11:50:13.780901 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/no-preload-465205/client.crt: no such file or directory
E0115 11:50:48.696563 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/old-k8s-version-064981/client.crt: no such file or directory
E0115 11:51:16.696383 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/functional-641147/client.crt: no such file or directory
E0115 11:51:35.701903 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/no-preload-465205/client.crt: no such file or directory
E0115 11:53:04.852538 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/old-k8s-version-064981/client.crt: no such file or directory
E0115 11:53:32.536729 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/old-k8s-version-064981/client.crt: no such file or directory
E0115 11:53:51.215110 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/client.crt: no such file or directory
E0115 11:53:51.857389 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/no-preload-465205/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-812907 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (5m52.860829458s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-812907 -n default-k8s-diff-port-812907
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (353.46s)
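
The repeated E0115 cert_rotation lines through this block are client-go's certificate reloader retrying client certs that no longer exist on disk, most likely because the owning profiles were deleted earlier in the run; they are log noise, not failures. A sketch of confirming the files are simply gone (paths copied from the errors above):

for p in old-k8s-version-064981 no-preload-465205; do
  f=/home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/$p/client.crt
  [ -f "$f" ] || echo "$f: missing, as the cert_rotation errors report"
done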

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-nlbd7" [30b0b392-bcb3-4ca0-b929-bf6f61e90729] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-nlbd7" [30b0b392-bcb3-4ca0-b929-bf6f61e90729] Running
E0115 11:54:03.992723 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.003613508s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (13.00s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-nlbd7" [30b0b392-bcb3-4ca0-b929-bf6f61e90729] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003476334s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-812907 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-812907 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.48s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-812907 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-812907 -n default-k8s-diff-port-812907
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-812907 -n default-k8s-diff-port-812907: exit status 2 (391.932054ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-812907 -n default-k8s-diff-port-812907
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-812907 -n default-k8s-diff-port-812907: exit status 2 (381.155425ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-812907 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-812907 -n default-k8s-diff-port-812907
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-812907 -n default-k8s-diff-port-812907
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.48s)
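
Note: both non-zero status checks above are expected, which is why the test logs "may be ok": while a profile is paused, `status` reports the apiserver as Paused and the kubelet as Stopped, and signals that state through its exit code. A minimal sketch of the pause/verify/unpause cycle this test performs (profile name taken from this run; the post-unpause output is not captured in the log and is an assumption):

    out/minikube-linux-arm64 pause -p default-k8s-diff-port-812907
    out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-812907   # prints "Paused", exits non-zero
    out/minikube-linux-arm64 unpause -p default-k8s-diff-port-812907
    out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-812907   # exits 0; expected to print "Running" (assumption)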

TestStartStop/group/newest-cni/serial/FirstStart (43.57s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-952751 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-952751 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (43.571051991s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (43.57s)
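
Note: the newest-cni group starts the cluster with --network-plugin=cni and hands kubeadm a custom pod CIDR via --extra-config, but installs no CNI; that is why several subtests below log "cni mode requires additional setup before pods can schedule" and skip their pod checks. The networking-related flags from the invocation above, in isolation:

    out/minikube-linux-arm64 start -p newest-cni-952751 \
      --network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=docker --container-runtime=crio --kubernetes-version=v1.29.0-rc.2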

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.24s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-952751 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-952751 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.237530254s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.24s)

TestStartStop/group/newest-cni/serial/Stop (1.29s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-952751 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-952751 --alsologtostderr -v=3: (1.285135185s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.29s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-952751 -n newest-cni-952751
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-952751 -n newest-cni-952751: exit status 7 (100.89844ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-952751 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)
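
Note: exit status 7 matches the "Stopped" host shown in the stdout block, so the test accepts it and goes on to enable the dashboard addon against the stopped profile; enabling an addon evidently does not require a running cluster. The same sequence in isolation:

    out/minikube-linux-arm64 stop -p newest-cni-952751
    out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-952751   # prints "Stopped", exits 7
    out/minikube-linux-arm64 addons enable dashboard -p newest-cni-952751 --images=MetricsScraper=registry.k8s.io/echoserver:1.4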

TestStartStop/group/newest-cni/serial/SecondStart (31.32s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-952751 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-952751 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (30.88735125s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-952751 -n newest-cni-952751
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (31.32s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-952751 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/newest-cni/serial/Pause (3.35s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-952751 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-952751 -n newest-cni-952751
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-952751 -n newest-cni-952751: exit status 2 (380.03383ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-952751 -n newest-cni-952751
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-952751 -n newest-cni-952751: exit status 2 (369.859351ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-952751 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-952751 -n newest-cni-952751
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-952751 -n newest-cni-952751
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.35s)

TestNetworkPlugins/group/auto/Start (78.83s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-165703 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0115 11:55:59.741651 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/functional-641147/client.crt: no such file or directory
E0115 11:56:16.696664 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/functional-641147/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-165703 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m18.829319914s)
--- PASS: TestNetworkPlugins/group/auto/Start (78.83s)

TestNetworkPlugins/group/auto/KubeletFlags (0.38s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-165703 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.38s)

TestNetworkPlugins/group/auto/NetCatPod (10.32s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-165703 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-dtpp2" [b268c83b-4d0f-4dad-bbe7-5ab2b544e18c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-dtpp2" [b268c83b-4d0f-4dad-bbe7-5ab2b544e18c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003739201s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.32s)

TestNetworkPlugins/group/auto/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-165703 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.21s)

TestNetworkPlugins/group/auto/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-165703 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

TestNetworkPlugins/group/auto/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-165703 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)
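
Note: the three short checks above all run from inside the netcat deployment: DNS resolves the in-cluster name kubernetes.default, Localhost connects to the pod's own listening port over 127.0.0.1, and HairPin connects back to the pod through its own service name, exercising hairpin NAT. The same probes, as the tests issue them:

    kubectl --context auto-165703 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-165703 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-165703 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"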

TestNetworkPlugins/group/kindnet/Start (77.13s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-165703 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0115 11:57:37.456819 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/default-k8s-diff-port-812907/client.crt: no such file or directory
E0115 11:57:37.462092 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/default-k8s-diff-port-812907/client.crt: no such file or directory
E0115 11:57:37.472341 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/default-k8s-diff-port-812907/client.crt: no such file or directory
E0115 11:57:37.492579 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/default-k8s-diff-port-812907/client.crt: no such file or directory
E0115 11:57:37.533550 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/default-k8s-diff-port-812907/client.crt: no such file or directory
E0115 11:57:37.614018 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/default-k8s-diff-port-812907/client.crt: no such file or directory
E0115 11:57:37.774403 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/default-k8s-diff-port-812907/client.crt: no such file or directory
E0115 11:57:38.094666 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/default-k8s-diff-port-812907/client.crt: no such file or directory
E0115 11:57:38.735378 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/default-k8s-diff-port-812907/client.crt: no such file or directory
E0115 11:57:40.016000 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/default-k8s-diff-port-812907/client.crt: no such file or directory
E0115 11:57:42.577424 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/default-k8s-diff-port-812907/client.crt: no such file or directory
E0115 11:57:47.698397 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/default-k8s-diff-port-812907/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-165703 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m17.133600376s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (77.13s)
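
Note: the E0115 cert_rotation lines interleaved above come from the shared test process, whose client-go certificate watchers still reference client.crt files of profiles that earlier tests have already deleted (default-k8s-diff-port-812907 here). They appear to be harmless cross-test noise; the kindnet start itself passes.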

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-twz2j" [5010f4e6-e9a8-4a4b-8396-596b19cd5f8a] Running
E0115 11:57:57.938870 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/default-k8s-diff-port-812907/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004764359s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.16s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-twz2j" [5010f4e6-e9a8-4a4b-8396-596b19cd5f8a] Running
E0115 11:58:04.852509 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/old-k8s-version-064981/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004305015s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-848307 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.16s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.32s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-848307 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.32s)

TestStartStop/group/embed-certs/serial/Pause (3.89s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-848307 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p embed-certs-848307 --alsologtostderr -v=1: (1.113551452s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-848307 -n embed-certs-848307
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-848307 -n embed-certs-848307: exit status 2 (425.336731ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-848307 -n embed-certs-848307
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-848307 -n embed-certs-848307: exit status 2 (396.004962ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-848307 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-848307 -n embed-certs-848307
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-848307 -n embed-certs-848307
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.89s)
E0115 12:03:25.381957 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/auto-165703/client.crt: no such file or directory
E0115 12:03:51.215055 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/client.crt: no such file or directory
E0115 12:03:51.857821 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/no-preload-465205/client.crt: no such file or directory
E0115 12:03:54.183263 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/kindnet-165703/client.crt: no such file or directory
E0115 12:03:54.188673 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/kindnet-165703/client.crt: no such file or directory
E0115 12:03:54.198918 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/kindnet-165703/client.crt: no such file or directory
E0115 12:03:54.219190 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/kindnet-165703/client.crt: no such file or directory
E0115 12:03:54.259519 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/kindnet-165703/client.crt: no such file or directory
E0115 12:03:54.339838 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/kindnet-165703/client.crt: no such file or directory
E0115 12:03:54.500196 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/kindnet-165703/client.crt: no such file or directory
E0115 12:03:54.820785 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/kindnet-165703/client.crt: no such file or directory
E0115 12:03:55.461582 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/kindnet-165703/client.crt: no such file or directory
E0115 12:03:56.741954 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/kindnet-165703/client.crt: no such file or directory
E0115 12:03:59.302317 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/kindnet-165703/client.crt: no such file or directory

TestNetworkPlugins/group/calico/Start (78.33s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-165703 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E0115 11:58:18.419360 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/default-k8s-diff-port-812907/client.crt: no such file or directory
E0115 11:58:47.036733 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/client.crt: no such file or directory
E0115 11:58:51.214372 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/addons-944407/client.crt: no such file or directory
E0115 11:58:51.858359 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/no-preload-465205/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-165703 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m18.32604441s)
--- PASS: TestNetworkPlugins/group/calico/Start (78.33s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.03s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-b2g5g" [3fa4c3c7-b4bf-451c-be27-34640efbdbe0] Running
E0115 11:58:59.379938 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/default-k8s-diff-port-812907/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.027272545s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.03s)
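
Note: each ControllerPod check waits for the CNI's own workload to be Running before any connectivity probes: app=kindnet in kube-system here, and analogously k8s-app=calico-node in kube-system and app=flannel in kube-flannel below. A minimal sketch of the same label-based check:

    kubectl --context kindnet-165703 -n kube-system get pods -l app=kindnet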

TestNetworkPlugins/group/kindnet/KubeletFlags (0.44s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-165703 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.44s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.39s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-165703 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-h9tdw" [2aa65fbd-89f8-465c-9d84-a9c431d95998] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0115 11:59:03.993407 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-h9tdw" [2aa65fbd-89f8-465c-9d84-a9c431d95998] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.004842605s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.39s)

TestNetworkPlugins/group/kindnet/DNS (0.36s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-165703 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.36s)

TestNetworkPlugins/group/kindnet/Localhost (0.28s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-165703 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.28s)

TestNetworkPlugins/group/kindnet/HairPin (0.25s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-165703 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.25s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-rpbbg" [1f381221-3602-49d0-81ac-34cf2163ad4f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005870112s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/custom-flannel/Start (57.84s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-165703 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-165703 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (57.835721661s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (57.84s)
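
Note: unlike the other plugin runs, which pass a built-in name to --cni (kindnet, calico, flannel, bridge), this group points --cni at a manifest file, showing that the flag also accepts a path to a custom CNI manifest. The distinguishing flag from the invocation above:

    out/minikube-linux-arm64 start -p custom-flannel-165703 --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=crio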

TestNetworkPlugins/group/calico/KubeletFlags (0.51s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-165703 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.51s)

TestNetworkPlugins/group/calico/NetCatPod (13.34s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-165703 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-ft9ft" [86c74f8e-8ae1-4492-82e7-e14638fae6d2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-ft9ft" [86c74f8e-8ae1-4492-82e7-e14638fae6d2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.005153482s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.34s)

TestNetworkPlugins/group/calico/DNS (0.25s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-165703 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.25s)

TestNetworkPlugins/group/calico/Localhost (0.23s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-165703 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.23s)

TestNetworkPlugins/group/calico/HairPin (0.19s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-165703 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

TestNetworkPlugins/group/enable-default-cni/Start (90.69s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-165703 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-165703 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m30.693690265s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (90.69s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.4s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-165703 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.40s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.32s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-165703 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-v4xbb" [f4020ebb-3e5b-4202-ad87-a00dc919e1bb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-v4xbb" [f4020ebb-3e5b-4202-ad87-a00dc919e1bb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.004223732s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.32s)

TestNetworkPlugins/group/custom-flannel/DNS (0.33s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-165703 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.33s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.26s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-165703 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.26s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.28s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-165703 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.28s)

TestNetworkPlugins/group/flannel/Start (67.75s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-165703 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0115 12:01:16.696664 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/functional-641147/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-165703 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m7.749550912s)
--- PASS: TestNetworkPlugins/group/flannel/Start (67.75s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-165703 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.31s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-165703 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-48xrn" [eccc525f-40b7-40e2-8753-4d84bb7bcb7e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-48xrn" [eccc525f-40b7-40e2-8753-4d84bb7bcb7e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.00459743s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.31s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-165703 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.24s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-165703 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.24s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-165703 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-bqnc2" [2b31ff28-f565-4e0e-92c5-80a400542f98] Running
E0115 12:02:23.940088 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/auto-165703/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004681305s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/bridge/Start (94.23s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-165703 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-165703 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m34.231377992s)
--- PASS: TestNetworkPlugins/group/bridge/Start (94.23s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.37s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-165703 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.37s)

TestNetworkPlugins/group/flannel/NetCatPod (10.38s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-165703 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-nbmm2" [817f1e18-184e-4486-b7d9-e8cb95ef4ec3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-nbmm2" [817f1e18-184e-4486-b7d9-e8cb95ef4ec3] Running
E0115 12:02:37.457449 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/default-k8s-diff-port-812907/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.005839757s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.38s)

TestNetworkPlugins/group/flannel/DNS (0.27s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-165703 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.27s)

TestNetworkPlugins/group/flannel/Localhost (0.24s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-165703 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.24s)

TestNetworkPlugins/group/flannel/HairPin (0.21s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-165703 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.21s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.39s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-165703 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.39s)

TestNetworkPlugins/group/bridge/NetCatPod (11.45s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-165703 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-p54th" [f8fe28b6-02b6-415e-9d1f-a178c500af1a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0115 12:04:03.992808 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/ingress-addon-legacy-406064/client.crt: no such file or directory
E0115 12:04:04.422893 1630435 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/kindnet-165703/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-p54th" [f8fe28b6-02b6-415e-9d1f-a178c500af1a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004134361s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.45s)

TestNetworkPlugins/group/bridge/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-165703 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

TestNetworkPlugins/group/bridge/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-165703 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

TestNetworkPlugins/group/bridge/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-165703 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

Test skip (32/320)

TestDownloadOnly/v1.16.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestDownloadOnlyKic (0.65s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-693615 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-693615" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-693615
--- SKIP: TestDownloadOnlyKic (0.65s)

TestOffline (0s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:444: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)
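
TestDockerFlags is gated on the container runtime selected for the run rather than on the architecture. A sketch of that gate, where ContainerRuntime() is a hypothetical helper standing in for however the harness reads the --container-runtime flag:

package example

import "testing"

// ContainerRuntime is a hypothetical helper; this run was invoked with
// --container-runtime=crio.
func ContainerRuntime() string { return "crio" }

func TestDockerFlagsSketch(t *testing.T) {
	if rt := ContainerRuntime(); rt != "docker" {
		t.Skipf("skipping: only runs with docker container runtime, currently testing %s", rt)
	}
}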

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1786: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)
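
The three tunnel DNS skips above combine an OS gate with a driver gate, since DNS forwarding is only supported for Hyperkit on Darwin. A sketch of the compound check, with DriverName() as a hypothetical helper for the driver under test:

package example

import (
	"runtime"
	"testing"
)

// DriverName is a hypothetical helper; this run uses --driver=docker.
func DriverName() string { return "docker" }

func TestTunnelDNSSketch(t *testing.T) {
	if runtime.GOOS != "darwin" || DriverName() != "hyperkit" {
		t.Skip("DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding")
	}
}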

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.22s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-343737" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-343737
--- SKIP: TestStartStop/group/disable-driver-mounts (0.22s)

TestNetworkPlugins/group/kubenet (5.32s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-165703 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-165703

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-165703

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-165703

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-165703

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-165703

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-165703

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-165703

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-165703

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-165703

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-165703

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165703"

>>> host: /etc/hosts:
* Profile "kubenet-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165703"

>>> host: /etc/resolv.conf:
* Profile "kubenet-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165703"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-165703

>>> host: crictl pods:
* Profile "kubenet-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165703"

>>> host: crictl containers:
* Profile "kubenet-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165703"

>>> k8s: describe netcat deployment:
error: context "kubenet-165703" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-165703" does not exist

>>> k8s: netcat logs:
error: context "kubenet-165703" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-165703" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-165703" does not exist

>>> k8s: coredns logs:
error: context "kubenet-165703" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-165703" does not exist

>>> k8s: api server logs:
error: context "kubenet-165703" does not exist

>>> host: /etc/cni:
* Profile "kubenet-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165703"

>>> host: ip a s:
* Profile "kubenet-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165703"

>>> host: ip r s:
* Profile "kubenet-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165703"

>>> host: iptables-save:
* Profile "kubenet-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165703"

>>> host: iptables table nat:
* Profile "kubenet-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165703"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-165703" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-165703" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-165703" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165703"

>>> host: kubelet daemon config:
* Profile "kubenet-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165703"

>>> k8s: kubelet logs:
* Profile "kubenet-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165703"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165703"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165703"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17953-1625104/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 15 Jan 2024 11:29:23 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: kubernetes-upgrade-413698
contexts:
- context:
    cluster: kubernetes-upgrade-413698
    user: kubernetes-upgrade-413698
  name: kubernetes-upgrade-413698
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-413698
  user:
    client-certificate: /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/kubernetes-upgrade-413698/client.crt
    client-key: /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/kubernetes-upgrade-413698/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-165703

>>> host: docker daemon status:
* Profile "kubenet-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165703"

>>> host: docker daemon config:
* Profile "kubenet-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165703"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165703"

>>> host: docker system info:
* Profile "kubenet-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165703"

>>> host: cri-docker daemon status:
* Profile "kubenet-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165703"

>>> host: cri-docker daemon config:
* Profile "kubenet-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165703"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165703"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165703"

>>> host: cri-dockerd version:
* Profile "kubenet-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165703"

>>> host: containerd daemon status:
* Profile "kubenet-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165703"

>>> host: containerd daemon config:
* Profile "kubenet-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165703"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165703"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165703"

>>> host: containerd config dump:
* Profile "kubenet-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165703"

>>> host: crio daemon status:
* Profile "kubenet-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165703"

>>> host: crio daemon config:
* Profile "kubenet-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165703"

>>> host: /etc/crio:
* Profile "kubenet-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165703"

>>> host: crio config:
* Profile "kubenet-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165703"

----------------------- debugLogs end: kubenet-165703 [took: 5.08633081s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-165703" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-165703
--- SKIP: TestNetworkPlugins/group/kubenet (5.32s)
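
Every error in the debug log above is expected: the kubenet-165703 profile was skipped before a cluster ever started, so the kubeconfig holds only a leftover kubernetes-upgrade-413698 entry and an empty current-context. A sketch that reproduces the "context was not found" condition with client-go (the kubeconfig path is assumed for illustration):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.LoadFromFile("/home/jenkins/.kube/config") // path assumed
	if err != nil {
		panic(err)
	}
	// The debug-log failures reduce to this missing map entry.
	if _, ok := cfg.Contexts["kubenet-165703"]; !ok {
		fmt.Println(`context was not found for specified context: kubenet-165703`)
	}
}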

TestNetworkPlugins/group/cilium (6.85s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-165703 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-165703

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-165703

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-165703

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-165703

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-165703

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-165703

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-165703

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-165703

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-165703

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-165703

>>> host: /etc/nsswitch.conf:
* Profile "cilium-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165703"

>>> host: /etc/hosts:
* Profile "cilium-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165703"

>>> host: /etc/resolv.conf:
* Profile "cilium-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165703"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-165703

>>> host: crictl pods:
* Profile "cilium-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165703"

>>> host: crictl containers:
* Profile "cilium-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165703"

>>> k8s: describe netcat deployment:
error: context "cilium-165703" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-165703" does not exist

>>> k8s: netcat logs:
error: context "cilium-165703" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-165703" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-165703" does not exist

>>> k8s: coredns logs:
error: context "cilium-165703" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-165703" does not exist

>>> k8s: api server logs:
error: context "cilium-165703" does not exist

>>> host: /etc/cni:
* Profile "cilium-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165703"

>>> host: ip a s:
* Profile "cilium-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165703"

>>> host: ip r s:
* Profile "cilium-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165703"

>>> host: iptables-save:
* Profile "cilium-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165703"

>>> host: iptables table nat:
* Profile "cilium-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165703"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-165703

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-165703

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-165703" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-165703" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-165703

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-165703

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-165703" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-165703" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-165703" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-165703" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-165703" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165703"

>>> host: kubelet daemon config:
* Profile "cilium-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165703"

>>> k8s: kubelet logs:
* Profile "cilium-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165703"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165703"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165703"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17953-1625104/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 15 Jan 2024 11:29:23 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: kubernetes-upgrade-413698
contexts:
- context:
    cluster: kubernetes-upgrade-413698
    user: kubernetes-upgrade-413698
  name: kubernetes-upgrade-413698
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-413698
  user:
    client-certificate: /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/kubernetes-upgrade-413698/client.crt
    client-key: /home/jenkins/minikube-integration/17953-1625104/.minikube/profiles/kubernetes-upgrade-413698/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-165703

>>> host: docker daemon status:
* Profile "cilium-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165703"

>>> host: docker daemon config:
* Profile "cilium-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165703"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165703"

>>> host: docker system info:
* Profile "cilium-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165703"

>>> host: cri-docker daemon status:
* Profile "cilium-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165703"

>>> host: cri-docker daemon config:
* Profile "cilium-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165703"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165703"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165703"

>>> host: cri-dockerd version:
* Profile "cilium-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165703"

>>> host: containerd daemon status:
* Profile "cilium-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165703"

>>> host: containerd daemon config:
* Profile "cilium-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165703"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165703"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165703"

>>> host: containerd config dump:
* Profile "cilium-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165703"

>>> host: crio daemon status:
* Profile "cilium-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165703"

>>> host: crio daemon config:
* Profile "cilium-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165703"

>>> host: /etc/crio:
* Profile "cilium-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165703"

>>> host: crio config:
* Profile "cilium-165703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165703"

----------------------- debugLogs end: cilium-165703 [took: 6.565653013s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-165703" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-165703
--- SKIP: TestNetworkPlugins/group/cilium (6.85s)