Test Report: Docker_Linux_crio_arm64 17967

10ecd0aeb1ec35670d13066c60edb6e287060cba:2024-01-16:32725

Test failures (3/320)

| Order | Failed test                                         | Duration (s) |
|-------|-----------------------------------------------------|--------------|
|    39 | TestAddons/parallel/Ingress                         |       167.34 |
|   171 | TestIngressAddonLegacy/serial/ValidateIngressAddons |       174.32 |
|   221 | TestMultiNode/serial/PingHostFrom2Pods              |         4.11 |

TestAddons/parallel/Ingress (167.34s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-005301 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-005301 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-005301 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [8c5d0e3b-f57b-4b97-880e-ab74e86eea74] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [8c5d0e3b-f57b-4b97-880e-ab74e86eea74] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004017103s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-005301 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-005301 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.127785596s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
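Note: curl exits with status 28 when an operation times out, so the "Process exited with status 28" above is consistent with the request to 127.0.0.1:80 inside the node never completing in the 2m10s the command ran. Assuming the addons-005301 profile is still up, the same probe can be rerun by hand with an explicit cap (a sketch, not part of the test):

	# Re-run the test's probe with a 30s timeout instead of waiting for the default.
	out/minikube-linux-arm64 -p addons-005301 ssh "curl -s -m 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
	# If it still times out, check whether the ingress controller is running at all.
	kubectl --context addons-005301 -n ingress-nginx get pods -o wide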
addons_test.go:286: (dbg) Run:  kubectl --context addons-005301 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-005301 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.045415846s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
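Note: the nslookup above uses the node IP (192.168.49.2) as the DNS server, so "no servers could be reached" means nothing answered on port 53 there. Assuming dig is available on the host, the ingress-dns endpoint can be probed with a shorter timeout (a sketch; the grep pattern is a guess at the pod name):

	# Query the ingress-dns responder on the node IP directly, failing fast.
	dig +time=2 +tries=1 @"$(out/minikube-linux-arm64 -p addons-005301 ip)" hello-john.test
	# Confirm the ingress-dns pod is actually running in kube-system.
	kubectl --context addons-005301 -n kube-system get pods | grep ingress-dns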
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p addons-005301 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-arm64 -p addons-005301 addons disable ingress-dns --alsologtostderr -v=1: (1.222652084s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p addons-005301 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p addons-005301 addons disable ingress --alsologtostderr -v=1: (7.791230327s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-005301
helpers_test.go:235: (dbg) docker inspect addons-005301:

-- stdout --
	[
	    {
	        "Id": "e892c3c95eccaaf0a5a6dbe4ddae6a9012e825ea4b022193b943e5c59ec5ae4e",
	        "Created": "2024-01-16T03:20:58.128424271Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 725894,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-16T03:20:58.461693338Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20e2d9b56eb2e595fd2b9c5719a0e58f3d7f8c692190d8fde2558cb6a9714f01",
	        "ResolvConfPath": "/var/lib/docker/containers/e892c3c95eccaaf0a5a6dbe4ddae6a9012e825ea4b022193b943e5c59ec5ae4e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e892c3c95eccaaf0a5a6dbe4ddae6a9012e825ea4b022193b943e5c59ec5ae4e/hostname",
	        "HostsPath": "/var/lib/docker/containers/e892c3c95eccaaf0a5a6dbe4ddae6a9012e825ea4b022193b943e5c59ec5ae4e/hosts",
	        "LogPath": "/var/lib/docker/containers/e892c3c95eccaaf0a5a6dbe4ddae6a9012e825ea4b022193b943e5c59ec5ae4e/e892c3c95eccaaf0a5a6dbe4ddae6a9012e825ea4b022193b943e5c59ec5ae4e-json.log",
	        "Name": "/addons-005301",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-005301:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-005301",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/205bc4d67c99973ea0ebaa7702d5100680b7efb2eea860813f555bdb41eba36a-init/diff:/var/lib/docker/overlay2/a206f4642a9a6aaf26e75b007cd03505dc1586f0041014295f47d8b249463698/diff",
	                "MergedDir": "/var/lib/docker/overlay2/205bc4d67c99973ea0ebaa7702d5100680b7efb2eea860813f555bdb41eba36a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/205bc4d67c99973ea0ebaa7702d5100680b7efb2eea860813f555bdb41eba36a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/205bc4d67c99973ea0ebaa7702d5100680b7efb2eea860813f555bdb41eba36a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-005301",
	                "Source": "/var/lib/docker/volumes/addons-005301/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-005301",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-005301",
	                "name.minikube.sigs.k8s.io": "addons-005301",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2097a3d8b0739bd1cd1af134263ef638bcf7199f9c604dd92a0ebe2b254fbd4e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33482"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33481"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33478"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33480"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33479"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/2097a3d8b073",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-005301": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "e892c3c95ecc",
	                        "addons-005301"
	                    ],
	                    "NetworkID": "7a0af7a8bb8bfc0fe1ead64e3a522a1a3ed4a8b5c4704bd42a41eeed64249d9a",
	                    "EndpointID": "3d043cdc3537501905bee9d70bd91d12f1c0fba709811eb821642b2083401472",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
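Note: individual fields can be pulled from the same inspect data with Go templates instead of scanning the full JSON; the harness does exactly this further down (see the docker container inspect -f calls in the Last Start log). Two examples against this container, using the same template style:

	# Node IP on the addons-005301 network (here 192.168.49.2).
	docker inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}" addons-005301
	# Published host port for SSH (22/tcp, here 33482).
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-005301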
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-005301 -n addons-005301
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-005301 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-005301 logs -n 25: (1.578688396s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-611919                                                                     | download-only-611919   | jenkins | v1.32.0 | 16 Jan 24 03:20 UTC | 16 Jan 24 03:20 UTC |
	| delete  | -p download-only-389954                                                                     | download-only-389954   | jenkins | v1.32.0 | 16 Jan 24 03:20 UTC | 16 Jan 24 03:20 UTC |
	| delete  | -p download-only-860274                                                                     | download-only-860274   | jenkins | v1.32.0 | 16 Jan 24 03:20 UTC | 16 Jan 24 03:20 UTC |
	| start   | --download-only -p                                                                          | download-docker-116937 | jenkins | v1.32.0 | 16 Jan 24 03:20 UTC |                     |
	|         | download-docker-116937                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-116937                                                                   | download-docker-116937 | jenkins | v1.32.0 | 16 Jan 24 03:20 UTC | 16 Jan 24 03:20 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-260234   | jenkins | v1.32.0 | 16 Jan 24 03:20 UTC |                     |
	|         | binary-mirror-260234                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:36735                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-260234                                                                     | binary-mirror-260234   | jenkins | v1.32.0 | 16 Jan 24 03:20 UTC | 16 Jan 24 03:20 UTC |
	| addons  | enable dashboard -p                                                                         | addons-005301          | jenkins | v1.32.0 | 16 Jan 24 03:20 UTC |                     |
	|         | addons-005301                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-005301          | jenkins | v1.32.0 | 16 Jan 24 03:20 UTC |                     |
	|         | addons-005301                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-005301 --wait=true                                                                | addons-005301          | jenkins | v1.32.0 | 16 Jan 24 03:20 UTC | 16 Jan 24 03:23 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker                                                               |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| ip      | addons-005301 ip                                                                            | addons-005301          | jenkins | v1.32.0 | 16 Jan 24 03:23 UTC | 16 Jan 24 03:23 UTC |
	| addons  | addons-005301 addons disable                                                                | addons-005301          | jenkins | v1.32.0 | 16 Jan 24 03:23 UTC | 16 Jan 24 03:23 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-005301 addons                                                                        | addons-005301          | jenkins | v1.32.0 | 16 Jan 24 03:23 UTC | 16 Jan 24 03:23 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-005301          | jenkins | v1.32.0 | 16 Jan 24 03:23 UTC | 16 Jan 24 03:23 UTC |
	|         | addons-005301                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-005301 ssh curl -s                                                                   | addons-005301          | jenkins | v1.32.0 | 16 Jan 24 03:24 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-005301 addons                                                                        | addons-005301          | jenkins | v1.32.0 | 16 Jan 24 03:24 UTC | 16 Jan 24 03:24 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-005301 addons                                                                        | addons-005301          | jenkins | v1.32.0 | 16 Jan 24 03:24 UTC | 16 Jan 24 03:24 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-005301          | jenkins | v1.32.0 | 16 Jan 24 03:24 UTC | 16 Jan 24 03:24 UTC |
	|         | -p addons-005301                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-005301 ssh cat                                                                       | addons-005301          | jenkins | v1.32.0 | 16 Jan 24 03:24 UTC | 16 Jan 24 03:24 UTC |
	|         | /opt/local-path-provisioner/pvc-f4595d61-448d-4d40-8aad-005cd4aa97ec_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-005301 addons disable                                                                | addons-005301          | jenkins | v1.32.0 | 16 Jan 24 03:24 UTC | 16 Jan 24 03:25 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-005301          | jenkins | v1.32.0 | 16 Jan 24 03:25 UTC | 16 Jan 24 03:25 UTC |
	|         | addons-005301                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-005301          | jenkins | v1.32.0 | 16 Jan 24 03:25 UTC | 16 Jan 24 03:25 UTC |
	|         | -p addons-005301                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-005301 ip                                                                            | addons-005301          | jenkins | v1.32.0 | 16 Jan 24 03:26 UTC | 16 Jan 24 03:26 UTC |
	| addons  | addons-005301 addons disable                                                                | addons-005301          | jenkins | v1.32.0 | 16 Jan 24 03:26 UTC | 16 Jan 24 03:26 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-005301 addons disable                                                                | addons-005301          | jenkins | v1.32.0 | 16 Jan 24 03:26 UTC | 16 Jan 24 03:26 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 03:20:34
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 03:20:34.686265  725437 out.go:296] Setting OutFile to fd 1 ...
	I0116 03:20:34.686471  725437 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:20:34.686500  725437 out.go:309] Setting ErrFile to fd 2...
	I0116 03:20:34.686520  725437 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:20:34.686814  725437 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-719286/.minikube/bin
	I0116 03:20:34.687304  725437 out.go:303] Setting JSON to false
	I0116 03:20:34.688163  725437 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10984,"bootTime":1705364251,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0116 03:20:34.688259  725437 start.go:138] virtualization:  
	I0116 03:20:34.691198  725437 out.go:177] * [addons-005301] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0116 03:20:34.694218  725437 out.go:177]   - MINIKUBE_LOCATION=17967
	I0116 03:20:34.697008  725437 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 03:20:34.694319  725437 notify.go:220] Checking for updates...
	I0116 03:20:34.699674  725437 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17967-719286/kubeconfig
	I0116 03:20:34.702041  725437 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-719286/.minikube
	I0116 03:20:34.704246  725437 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0116 03:20:34.706726  725437 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 03:20:34.709006  725437 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 03:20:34.731922  725437 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0116 03:20:34.732040  725437 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 03:20:34.808003  725437 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2024-01-16 03:20:34.79883607 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0116 03:20:34.808138  725437 docker.go:295] overlay module found
	I0116 03:20:34.811741  725437 out.go:177] * Using the docker driver based on user configuration
	I0116 03:20:34.814477  725437 start.go:298] selected driver: docker
	I0116 03:20:34.814504  725437 start.go:902] validating driver "docker" against <nil>
	I0116 03:20:34.814524  725437 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 03:20:34.815111  725437 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 03:20:34.894497  725437 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2024-01-16 03:20:34.88587009 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0116 03:20:34.894640  725437 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0116 03:20:34.894949  725437 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0116 03:20:34.896907  725437 out.go:177] * Using Docker driver with root privileges
	I0116 03:20:34.898837  725437 cni.go:84] Creating CNI manager for ""
	I0116 03:20:34.898858  725437 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0116 03:20:34.898871  725437 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0116 03:20:34.898880  725437 start_flags.go:321] config:
	{Name:addons-005301 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-005301 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:20:34.901386  725437 out.go:177] * Starting control plane node addons-005301 in cluster addons-005301
	I0116 03:20:34.903257  725437 cache.go:121] Beginning downloading kic base image for docker with crio
	I0116 03:20:34.905420  725437 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0116 03:20:34.907428  725437 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 03:20:34.907487  725437 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17967-719286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I0116 03:20:34.907497  725437 cache.go:56] Caching tarball of preloaded images
	I0116 03:20:34.907527  725437 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0116 03:20:34.907581  725437 preload.go:174] Found /home/jenkins/minikube-integration/17967-719286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0116 03:20:34.907591  725437 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0116 03:20:34.907936  725437 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/config.json ...
	I0116 03:20:34.907964  725437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/config.json: {Name:mk0f63e940902d7a1111e5519905925c882e2d38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:20:34.923572  725437 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0116 03:20:34.923734  725437 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0116 03:20:34.923755  725437 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory, skipping pull
	I0116 03:20:34.923760  725437 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in cache, skipping pull
	I0116 03:20:34.923768  725437 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	I0116 03:20:34.923773  725437 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 from local cache
	I0116 03:20:50.339987  725437 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 from cached tarball
	I0116 03:20:50.340027  725437 cache.go:194] Successfully downloaded all kic artifacts
	I0116 03:20:50.340120  725437 start.go:365] acquiring machines lock for addons-005301: {Name:mke89d2b0a1eabf1638d942bf85232b93a05af07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 03:20:50.340579  725437 start.go:369] acquired machines lock for "addons-005301" in 435.983µs
	I0116 03:20:50.340612  725437 start.go:93] Provisioning new machine with config: &{Name:addons-005301 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-005301 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 03:20:50.340690  725437 start.go:125] createHost starting for "" (driver="docker")
	I0116 03:20:50.343109  725437 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0116 03:20:50.343349  725437 start.go:159] libmachine.API.Create for "addons-005301" (driver="docker")
	I0116 03:20:50.343380  725437 client.go:168] LocalClient.Create starting
	I0116 03:20:50.343513  725437 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17967-719286/.minikube/certs/ca.pem
	I0116 03:20:51.037535  725437 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17967-719286/.minikube/certs/cert.pem
	I0116 03:20:51.504266  725437 cli_runner.go:164] Run: docker network inspect addons-005301 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0116 03:20:51.526990  725437 cli_runner.go:211] docker network inspect addons-005301 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0116 03:20:51.527075  725437 network_create.go:281] running [docker network inspect addons-005301] to gather additional debugging logs...
	I0116 03:20:51.527097  725437 cli_runner.go:164] Run: docker network inspect addons-005301
	W0116 03:20:51.546512  725437 cli_runner.go:211] docker network inspect addons-005301 returned with exit code 1
	I0116 03:20:51.546544  725437 network_create.go:284] error running [docker network inspect addons-005301]: docker network inspect addons-005301: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-005301 not found
	I0116 03:20:51.546557  725437 network_create.go:286] output of [docker network inspect addons-005301]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-005301 not found
	
	** /stderr **
	I0116 03:20:51.546647  725437 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0116 03:20:51.564495  725437 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400075bb30}
	I0116 03:20:51.564532  725437 network_create.go:124] attempt to create docker network addons-005301 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0116 03:20:51.564590  725437 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-005301 addons-005301
	I0116 03:20:51.639772  725437 network_create.go:108] docker network addons-005301 192.168.49.0/24 created
	I0116 03:20:51.639804  725437 kic.go:121] calculated static IP "192.168.49.2" for the "addons-005301" container
	I0116 03:20:51.639877  725437 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0116 03:20:51.657045  725437 cli_runner.go:164] Run: docker volume create addons-005301 --label name.minikube.sigs.k8s.io=addons-005301 --label created_by.minikube.sigs.k8s.io=true
	I0116 03:20:51.675133  725437 oci.go:103] Successfully created a docker volume addons-005301
	I0116 03:20:51.675230  725437 cli_runner.go:164] Run: docker run --rm --name addons-005301-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-005301 --entrypoint /usr/bin/test -v addons-005301:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0116 03:20:53.817934  725437 cli_runner.go:217] Completed: docker run --rm --name addons-005301-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-005301 --entrypoint /usr/bin/test -v addons-005301:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib: (2.142654039s)
	I0116 03:20:53.817963  725437 oci.go:107] Successfully prepared a docker volume addons-005301
	I0116 03:20:53.817989  725437 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 03:20:53.818007  725437 kic.go:194] Starting extracting preloaded images to volume ...
	I0116 03:20:53.818096  725437 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17967-719286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-005301:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0116 03:20:58.047840  725437 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17967-719286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-005301:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.229706667s)
	I0116 03:20:58.047871  725437 kic.go:203] duration metric: took 4.229862 seconds to extract preloaded images to volume
	W0116 03:20:58.048017  725437 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0116 03:20:58.048147  725437 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0116 03:20:58.113374  725437 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-005301 --name addons-005301 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-005301 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-005301 --network addons-005301 --ip 192.168.49.2 --volume addons-005301:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0
	I0116 03:20:58.471853  725437 cli_runner.go:164] Run: docker container inspect addons-005301 --format={{.State.Running}}
	I0116 03:20:58.494841  725437 cli_runner.go:164] Run: docker container inspect addons-005301 --format={{.State.Status}}
	I0116 03:20:58.514806  725437 cli_runner.go:164] Run: docker exec addons-005301 stat /var/lib/dpkg/alternatives/iptables
	I0116 03:20:58.586461  725437 oci.go:144] the created container "addons-005301" has a running status.
	I0116 03:20:58.586486  725437 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17967-719286/.minikube/machines/addons-005301/id_rsa...
	I0116 03:20:58.754137  725437 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17967-719286/.minikube/machines/addons-005301/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0116 03:20:58.785634  725437 cli_runner.go:164] Run: docker container inspect addons-005301 --format={{.State.Status}}
	I0116 03:20:58.806373  725437 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0116 03:20:58.806399  725437 kic_runner.go:114] Args: [docker exec --privileged addons-005301 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0116 03:20:58.887267  725437 cli_runner.go:164] Run: docker container inspect addons-005301 --format={{.State.Status}}
	I0116 03:20:58.912078  725437 machine.go:88] provisioning docker machine ...
	I0116 03:20:58.912108  725437 ubuntu.go:169] provisioning hostname "addons-005301"
	I0116 03:20:58.912178  725437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-005301
	I0116 03:20:58.940588  725437 main.go:141] libmachine: Using SSH client type: native
	I0116 03:20:58.940994  725437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 33482 <nil> <nil>}
	I0116 03:20:58.941006  725437 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-005301 && echo "addons-005301" | sudo tee /etc/hostname
	I0116 03:20:58.941741  725437 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0116 03:21:02.094305  725437 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-005301
	
	I0116 03:21:02.094403  725437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-005301
	I0116 03:21:02.114991  725437 main.go:141] libmachine: Using SSH client type: native
	I0116 03:21:02.115391  725437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 33482 <nil> <nil>}
	I0116 03:21:02.115412  725437 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-005301' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-005301/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-005301' | sudo tee -a /etc/hosts; 
				fi
			fi
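	This hostname script is idempotent: grep -xq first checks whether /etc/hosts already has a line ending in addons-005301, rewrites a stale 127.0.1.1 entry in place if one exists, and only appends otherwise. A quick check of the result from outside the container:

		docker exec addons-005301 grep 127.0.1.1 /etc/hosts   # expect: 127.0.1.1 addons-005301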
	I0116 03:21:02.249063  725437 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 03:21:02.249092  725437 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17967-719286/.minikube CaCertPath:/home/jenkins/minikube-integration/17967-719286/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17967-719286/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17967-719286/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17967-719286/.minikube}
	I0116 03:21:02.249129  725437 ubuntu.go:177] setting up certificates
	I0116 03:21:02.249141  725437 provision.go:83] configureAuth start
	I0116 03:21:02.249201  725437 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-005301
	I0116 03:21:02.267710  725437 provision.go:138] copyHostCerts
	I0116 03:21:02.267782  725437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-719286/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17967-719286/.minikube/ca.pem (1082 bytes)
	I0116 03:21:02.267908  725437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-719286/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17967-719286/.minikube/cert.pem (1123 bytes)
	I0116 03:21:02.267968  725437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-719286/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17967-719286/.minikube/key.pem (1675 bytes)
	I0116 03:21:02.268014  725437 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17967-719286/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17967-719286/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17967-719286/.minikube/certs/ca-key.pem org=jenkins.addons-005301 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-005301]
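	The server certificate is issued with the SAN list shown above (node IP, localhost, service names), which is what lets clients validate TLS against any of those addresses. One way to inspect the SANs on the generated cert, assuming openssl is available on the host:

		openssl x509 -noout -text -in /home/jenkins/minikube-integration/17967-719286/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'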
	I0116 03:21:02.917300  725437 provision.go:172] copyRemoteCerts
	I0116 03:21:02.917382  725437 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 03:21:02.917435  725437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-005301
	I0116 03:21:02.935446  725437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33482 SSHKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/machines/addons-005301/id_rsa Username:docker}
	I0116 03:21:03.039295  725437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-719286/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0116 03:21:03.068324  725437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-719286/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 03:21:03.097598  725437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-719286/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 03:21:03.126759  725437 provision.go:86] duration metric: configureAuth took 877.605037ms
	I0116 03:21:03.126788  725437 ubuntu.go:193] setting minikube options for container-runtime
	I0116 03:21:03.127007  725437 config.go:182] Loaded profile config "addons-005301": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:21:03.127118  725437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-005301
	I0116 03:21:03.145390  725437 main.go:141] libmachine: Using SSH client type: native
	I0116 03:21:03.145800  725437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 33482 <nil> <nil>}
	I0116 03:21:03.145819  725437 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 03:21:03.392389  725437 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 03:21:03.392413  725437 machine.go:91] provisioned docker machine in 4.480315533s
	I0116 03:21:03.392423  725437 client.go:171] LocalClient.Create took 13.049036489s
	I0116 03:21:03.392436  725437 start.go:167] duration metric: libmachine.API.Create for "addons-005301" took 13.049087985s
	I0116 03:21:03.392444  725437 start.go:300] post-start starting for "addons-005301" (driver="docker")
	I0116 03:21:03.392457  725437 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 03:21:03.392534  725437 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 03:21:03.392582  725437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-005301
	I0116 03:21:03.411825  725437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33482 SSHKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/machines/addons-005301/id_rsa Username:docker}
	I0116 03:21:03.511153  725437 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 03:21:03.515269  725437 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0116 03:21:03.515348  725437 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0116 03:21:03.515368  725437 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0116 03:21:03.515377  725437 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0116 03:21:03.515387  725437 filesync.go:126] Scanning /home/jenkins/minikube-integration/17967-719286/.minikube/addons for local assets ...
	I0116 03:21:03.515448  725437 filesync.go:126] Scanning /home/jenkins/minikube-integration/17967-719286/.minikube/files for local assets ...
	I0116 03:21:03.515476  725437 start.go:303] post-start completed in 123.024018ms
	I0116 03:21:03.515774  725437 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-005301
	I0116 03:21:03.533926  725437 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/config.json ...
	I0116 03:21:03.534199  725437 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0116 03:21:03.534254  725437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-005301
	I0116 03:21:03.551209  725437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33482 SSHKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/machines/addons-005301/id_rsa Username:docker}
	I0116 03:21:03.645925  725437 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0116 03:21:03.651220  725437 start.go:128] duration metric: createHost completed in 13.31051275s
	I0116 03:21:03.651241  725437 start.go:83] releasing machines lock for "addons-005301", held for 13.310647028s
	I0116 03:21:03.651316  725437 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-005301
	I0116 03:21:03.668618  725437 ssh_runner.go:195] Run: cat /version.json
	I0116 03:21:03.668662  725437 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 03:21:03.668670  725437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-005301
	I0116 03:21:03.668730  725437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-005301
	I0116 03:21:03.686951  725437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33482 SSHKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/machines/addons-005301/id_rsa Username:docker}
	I0116 03:21:03.701706  725437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33482 SSHKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/machines/addons-005301/id_rsa Username:docker}
	I0116 03:21:03.784225  725437 ssh_runner.go:195] Run: systemctl --version
	I0116 03:21:03.920108  725437 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 03:21:04.066099  725437 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0116 03:21:04.071432  725437 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 03:21:04.094418  725437 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0116 03:21:04.094493  725437 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 03:21:04.129983  725437 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
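	Renaming configs to *.mk_disabled is how minikube masks the bundled loopback and bridge CNI definitions so that the kindnet config installed later takes precedence; nothing is deleted. Re-enabling them would just be the reverse rename, e.g.:

		sudo find /etc/cni/net.d -name '*.mk_disabled' -exec sh -c 'mv "$1" "${1%.mk_disabled}"' _ {} \;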
	I0116 03:21:04.130004  725437 start.go:475] detecting cgroup driver to use...
	I0116 03:21:04.130035  725437 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0116 03:21:04.130090  725437 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 03:21:04.147506  725437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 03:21:04.160418  725437 docker.go:217] disabling cri-docker service (if available) ...
	I0116 03:21:04.160485  725437 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 03:21:04.175697  725437 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 03:21:04.192042  725437 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 03:21:04.282186  725437 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 03:21:04.385179  725437 docker.go:233] disabling docker service ...
	I0116 03:21:04.385242  725437 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 03:21:04.406769  725437 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 03:21:04.420427  725437 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 03:21:04.517655  725437 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 03:21:04.611402  725437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 03:21:04.625039  725437 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 03:21:04.643467  725437 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 03:21:04.643548  725437 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:21:04.655188  725437 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 03:21:04.655277  725437 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:21:04.667201  725437 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:21:04.678867  725437 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
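	Taken together, the four sed edits above pin the pause image and switch CRI-O to the cgroupfs driver (matching the "cgroupfs" driver detected on the host below), with conmon placed in the pod cgroup. Assuming CRI-O's standard drop-in layout, the relevant fragment of /etc/crio/crio.conf.d/02-crio.conf afterwards looks roughly like:

		[crio.image]
		pause_image = "registry.k8s.io/pause:3.9"

		[crio.runtime]
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"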
	I0116 03:21:04.691164  725437 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 03:21:04.701775  725437 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 03:21:04.711620  725437 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 03:21:04.721314  725437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:21:04.813203  725437 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 03:21:04.930598  725437 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 03:21:04.930683  725437 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 03:21:04.935259  725437 start.go:543] Will wait 60s for crictl version
	I0116 03:21:04.935326  725437 ssh_runner.go:195] Run: which crictl
	I0116 03:21:04.939556  725437 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 03:21:04.981203  725437 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0116 03:21:04.981371  725437 ssh_runner.go:195] Run: crio --version
	I0116 03:21:05.028929  725437 ssh_runner.go:195] Run: crio --version
	I0116 03:21:05.073507  725437 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0116 03:21:05.075603  725437 cli_runner.go:164] Run: docker network inspect addons-005301 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0116 03:21:05.093139  725437 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0116 03:21:05.097801  725437 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:21:05.111053  725437 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 03:21:05.111117  725437 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:21:05.183640  725437 crio.go:496] all images are preloaded for cri-o runtime.
	I0116 03:21:05.183661  725437 crio.go:415] Images already preloaded, skipping extraction
	I0116 03:21:05.183721  725437 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:21:05.223025  725437 crio.go:496] all images are preloaded for cri-o runtime.
	I0116 03:21:05.223046  725437 cache_images.go:84] Images are preloaded, skipping loading
	I0116 03:21:05.223118  725437 ssh_runner.go:195] Run: crio config
	I0116 03:21:05.278276  725437 cni.go:84] Creating CNI manager for ""
	I0116 03:21:05.278302  725437 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0116 03:21:05.278363  725437 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 03:21:05.278391  725437 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-005301 NodeName:addons-005301 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 03:21:05.278554  725437 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-005301"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
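	This is the full generated kubeadm configuration (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one multi-document file). If you want to sanity-check a config like this outside of a test run, recent kubeadm releases (v1.26+) can validate it offline, assuming the matching binary is on PATH:

		kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml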
	
	I0116 03:21:05.278617  725437 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-005301 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-005301 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
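	In the systemd drop-in above, the empty ExecStart= line is deliberate: it clears any ExecStart inherited from the packaged kubelet.service before substituting minikube's own invocation, since systemd rejects a second ExecStart for simple services without this reset. The merged unit can be inspected on the node with:

		systemctl cat kubelet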
	I0116 03:21:05.278681  725437 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 03:21:05.288968  725437 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 03:21:05.289060  725437 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 03:21:05.298853  725437 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I0116 03:21:05.319080  725437 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 03:21:05.339020  725437 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I0116 03:21:05.358682  725437 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0116 03:21:05.362744  725437 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:21:05.374961  725437 certs.go:56] Setting up /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301 for IP: 192.168.49.2
	I0116 03:21:05.375023  725437 certs.go:190] acquiring lock for shared ca certs: {Name:mkc1cd6c1048e37282c341d17731487c267a60dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:21:05.375507  725437 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17967-719286/.minikube/ca.key
	I0116 03:21:05.662218  725437 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17967-719286/.minikube/ca.crt ...
	I0116 03:21:05.662248  725437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-719286/.minikube/ca.crt: {Name:mk67486ce697eb7d5c15ab1f2c8b4b0f62f15c5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:21:05.662835  725437 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17967-719286/.minikube/ca.key ...
	I0116 03:21:05.662849  725437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-719286/.minikube/ca.key: {Name:mk9da738570b5f0bb5febb88ca7d237d8f146895 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:21:05.662949  725437 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17967-719286/.minikube/proxy-client-ca.key
	I0116 03:21:05.850646  725437 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17967-719286/.minikube/proxy-client-ca.crt ...
	I0116 03:21:05.850673  725437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-719286/.minikube/proxy-client-ca.crt: {Name:mk25aa284a1763eafc2c7f24d3439dc30d08be05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:21:05.850842  725437 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17967-719286/.minikube/proxy-client-ca.key ...
	I0116 03:21:05.850854  725437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-719286/.minikube/proxy-client-ca.key: {Name:mkfb2cff1b212de75f889d7cbe7fc220603752d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:21:05.851633  725437 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/client.key
	I0116 03:21:05.851651  725437 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/client.crt with IP's: []
	I0116 03:21:06.115704  725437 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/client.crt ...
	I0116 03:21:06.115735  725437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/client.crt: {Name:mk9f1376004e7771fdf5f01a54bd2cc91f19f692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:21:06.115918  725437 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/client.key ...
	I0116 03:21:06.115930  725437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/client.key: {Name:mkae476b64e2ccf4cedb3899f23a557e1f247dbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:21:06.116523  725437 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/apiserver.key.dd3b5fb2
	I0116 03:21:06.116550  725437 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0116 03:21:06.612598  725437 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/apiserver.crt.dd3b5fb2 ...
	I0116 03:21:06.612636  725437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/apiserver.crt.dd3b5fb2: {Name:mk72298090a5e5f41fe90aa3dfa8665881ade4c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:21:06.613284  725437 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/apiserver.key.dd3b5fb2 ...
	I0116 03:21:06.613302  725437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/apiserver.key.dd3b5fb2: {Name:mk50ca0b02c3639e327cd0bd449b702e03c8c0be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:21:06.613873  725437 certs.go:337] copying /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/apiserver.crt
	I0116 03:21:06.613955  725437 certs.go:341] copying /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/apiserver.key
	I0116 03:21:06.614004  725437 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/proxy-client.key
	I0116 03:21:06.614023  725437 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/proxy-client.crt with IP's: []
	I0116 03:21:06.912875  725437 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/proxy-client.crt ...
	I0116 03:21:06.912905  725437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/proxy-client.crt: {Name:mk186fe17d43744c09951f828141bbd7508d9ebd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:21:06.913081  725437 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/proxy-client.key ...
	I0116 03:21:06.913093  725437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/proxy-client.key: {Name:mkcce99c9603f81e28562b00fa667449daac3c41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:21:06.913274  725437 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-719286/.minikube/certs/home/jenkins/minikube-integration/17967-719286/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 03:21:06.913317  725437 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-719286/.minikube/certs/home/jenkins/minikube-integration/17967-719286/.minikube/certs/ca.pem (1082 bytes)
	I0116 03:21:06.913346  725437 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-719286/.minikube/certs/home/jenkins/minikube-integration/17967-719286/.minikube/certs/cert.pem (1123 bytes)
	I0116 03:21:06.913375  725437 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-719286/.minikube/certs/home/jenkins/minikube-integration/17967-719286/.minikube/certs/key.pem (1675 bytes)
	I0116 03:21:06.914045  725437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 03:21:06.941920  725437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0116 03:21:06.968906  725437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 03:21:06.996413  725437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0116 03:21:07.024171  725437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-719286/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 03:21:07.051622  725437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-719286/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0116 03:21:07.079092  725437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-719286/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 03:21:07.105887  725437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-719286/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0116 03:21:07.132509  725437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-719286/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 03:21:07.160420  725437 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 03:21:07.180813  725437 ssh_runner.go:195] Run: openssl version
	I0116 03:21:07.187494  725437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 03:21:07.199147  725437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:21:07.203585  725437 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 03:21 /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:21:07.203660  725437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:21:07.212237  725437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
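	The b5213941.0 symlink follows OpenSSL's hashed-directory convention: the link is named after the CA certificate's subject hash (the value produced by the openssl x509 -hash call above), which is how OpenSSL locates minikubeCA.pem in /etc/ssl/certs during chain verification. To confirm the pairing by hand:

		openssl x509 -noout -hash -in /usr/share/ca-certificates/minikubeCA.pem   # expect: b5213941
		ls -l /etc/ssl/certs/b5213941.0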
	I0116 03:21:07.224059  725437 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 03:21:07.228481  725437 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 03:21:07.228561  725437 kubeadm.go:404] StartCluster: {Name:addons-005301 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-005301 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:21:07.228638  725437 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 03:21:07.228692  725437 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:21:07.273538  725437 cri.go:89] found id: ""
	I0116 03:21:07.273605  725437 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 03:21:07.284447  725437 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:21:07.294995  725437 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0116 03:21:07.295055  725437 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:21:07.305514  725437 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:21:07.305569  725437 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0116 03:21:07.360640  725437 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0116 03:21:07.360870  725437 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 03:21:07.405460  725437 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0116 03:21:07.405538  725437 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-aws
	I0116 03:21:07.405587  725437 kubeadm.go:322] OS: Linux
	I0116 03:21:07.405656  725437 kubeadm.go:322] CGROUPS_CPU: enabled
	I0116 03:21:07.405726  725437 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0116 03:21:07.405786  725437 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0116 03:21:07.405849  725437 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0116 03:21:07.405924  725437 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0116 03:21:07.405991  725437 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0116 03:21:07.406049  725437 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0116 03:21:07.406113  725437 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0116 03:21:07.406173  725437 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0116 03:21:07.493882  725437 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 03:21:07.494051  725437 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 03:21:07.494180  725437 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 03:21:07.741678  725437 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 03:21:07.745675  725437 out.go:204]   - Generating certificates and keys ...
	I0116 03:21:07.745808  725437 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 03:21:07.745913  725437 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 03:21:07.907954  725437 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0116 03:21:08.397785  725437 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0116 03:21:09.018914  725437 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0116 03:21:09.745376  725437 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0116 03:21:10.456460  725437 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0116 03:21:10.456765  725437 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-005301 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0116 03:21:10.698251  725437 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0116 03:21:10.698588  725437 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-005301 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0116 03:21:10.916145  725437 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0116 03:21:11.433621  725437 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0116 03:21:11.780530  725437 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0116 03:21:11.780775  725437 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 03:21:11.990048  725437 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 03:21:12.876918  725437 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 03:21:13.086003  725437 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 03:21:13.584029  725437 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 03:21:13.584726  725437 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 03:21:13.587330  725437 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 03:21:13.589867  725437 out.go:204]   - Booting up control plane ...
	I0116 03:21:13.589962  725437 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 03:21:13.590040  725437 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 03:21:13.590504  725437 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 03:21:13.602691  725437 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 03:21:13.603772  725437 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 03:21:13.603988  725437 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0116 03:21:13.705445  725437 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 03:21:20.207932  725437 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.502903 seconds
	I0116 03:21:20.208047  725437 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 03:21:20.222324  725437 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 03:21:20.753649  725437 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0116 03:21:20.753835  725437 kubeadm.go:322] [mark-control-plane] Marking the node addons-005301 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0116 03:21:21.265969  725437 kubeadm.go:322] [bootstrap-token] Using token: ves0yb.n4ivktr83pc8ia60
	I0116 03:21:21.268694  725437 out.go:204]   - Configuring RBAC rules ...
	I0116 03:21:21.268810  725437 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 03:21:21.274462  725437 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0116 03:21:21.283842  725437 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 03:21:21.287737  725437 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 03:21:21.291245  725437 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 03:21:21.294655  725437 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 03:21:21.307660  725437 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0116 03:21:21.547377  725437 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0116 03:21:21.684453  725437 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0116 03:21:21.688124  725437 kubeadm.go:322] 
	I0116 03:21:21.688200  725437 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0116 03:21:21.688206  725437 kubeadm.go:322] 
	I0116 03:21:21.688286  725437 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0116 03:21:21.688291  725437 kubeadm.go:322] 
	I0116 03:21:21.688330  725437 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0116 03:21:21.688392  725437 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 03:21:21.688443  725437 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 03:21:21.688448  725437 kubeadm.go:322] 
	I0116 03:21:21.688511  725437 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0116 03:21:21.688516  725437 kubeadm.go:322] 
	I0116 03:21:21.688563  725437 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0116 03:21:21.688567  725437 kubeadm.go:322] 
	I0116 03:21:21.688620  725437 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0116 03:21:21.688700  725437 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 03:21:21.688767  725437 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 03:21:21.688772  725437 kubeadm.go:322] 
	I0116 03:21:21.688851  725437 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0116 03:21:21.688922  725437 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0116 03:21:21.688927  725437 kubeadm.go:322] 
	I0116 03:21:21.689015  725437 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ves0yb.n4ivktr83pc8ia60 \
	I0116 03:21:21.689150  725437 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:78b446be54113cf43e3853835de42782a6b98d45d441359ad299b10cb7c55484 \
	I0116 03:21:21.689174  725437 kubeadm.go:322] 	--control-plane 
	I0116 03:21:21.689181  725437 kubeadm.go:322] 
	I0116 03:21:21.689261  725437 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0116 03:21:21.689265  725437 kubeadm.go:322] 
	I0116 03:21:21.689342  725437 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ves0yb.n4ivktr83pc8ia60 \
	I0116 03:21:21.689437  725437 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:78b446be54113cf43e3853835de42782a6b98d45d441359ad299b10cb7c55484 
	I0116 03:21:21.690112  725437 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I0116 03:21:21.690308  725437 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 03:21:21.690321  725437 cni.go:84] Creating CNI manager for ""
	I0116 03:21:21.690334  725437 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0116 03:21:21.694023  725437 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0116 03:21:21.696133  725437 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0116 03:21:21.704367  725437 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0116 03:21:21.704437  725437 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0116 03:21:21.770167  725437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0116 03:21:22.682291  725437 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 03:21:22.682402  725437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:21:22.682422  725437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd minikube.k8s.io/name=addons-005301 minikube.k8s.io/updated_at=2024_01_16T03_21_22_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:21:22.850338  725437 ops.go:34] apiserver oom_adj: -16
	I0116 03:21:22.850426  725437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:21:23.351150  725437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:21:23.851114  725437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:21:24.351296  725437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:21:24.851149  725437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:21:25.351291  725437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:21:25.851528  725437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:21:26.350791  725437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:21:26.850911  725437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:21:27.351333  725437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:21:27.850554  725437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:21:28.351168  725437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:21:28.850748  725437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:21:29.351058  725437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:21:29.851451  725437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:21:30.351038  725437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:21:30.850622  725437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:21:31.351045  725437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:21:31.851477  725437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:21:32.350928  725437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:21:32.850514  725437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:21:33.351109  725437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:21:33.850573  725437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:21:34.351284  725437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:21:34.460837  725437 kubeadm.go:1088] duration metric: took 11.778545457s to wait for elevateKubeSystemPrivileges.
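	The burst of identical "kubectl get sa default" calls above is a simple readiness poll: minikube retries roughly every 500ms until the default ServiceAccount exists, which signals that the controller-manager's service-account machinery is up and the RBAC binding can be granted. An equivalent hand-rolled wait, assuming kubectl is pointed at the new cluster:

		until kubectl -n default get serviceaccount default >/dev/null 2>&1; do sleep 0.5; done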
	I0116 03:21:34.460866  725437 kubeadm.go:406] StartCluster complete in 27.23230941s
	I0116 03:21:34.460883  725437 settings.go:142] acquiring lock: {Name:mk09c1af0296e0da2e97c553b187ecf4aec5fda4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:21:34.460983  725437 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17967-719286/kubeconfig
	I0116 03:21:34.461387  725437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-719286/kubeconfig: {Name:mk79a070d6b32850c1522eb5f09a1fb050b71442 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:21:34.461561  725437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 03:21:34.461829  725437 config.go:182] Loaded profile config "addons-005301": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:21:34.461978  725437 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
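	Each entry in the toEnable map above then gets its own "Setting addon ... in addons-005301" block below. The same switches can be toggled per profile from the CLI, e.g. (assuming a minikube binary on PATH):

		minikube -p addons-005301 addons enable ingress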
	I0116 03:21:34.462068  725437 addons.go:69] Setting yakd=true in profile "addons-005301"
	I0116 03:21:34.462081  725437 addons.go:234] Setting addon yakd=true in "addons-005301"
	I0116 03:21:34.462116  725437 host.go:66] Checking if "addons-005301" exists ...
	I0116 03:21:34.462551  725437 cli_runner.go:164] Run: docker container inspect addons-005301 --format={{.State.Status}}
	I0116 03:21:34.463038  725437 addons.go:69] Setting cloud-spanner=true in profile "addons-005301"
	I0116 03:21:34.463056  725437 addons.go:234] Setting addon cloud-spanner=true in "addons-005301"
	I0116 03:21:34.463095  725437 host.go:66] Checking if "addons-005301" exists ...
	I0116 03:21:34.463488  725437 cli_runner.go:164] Run: docker container inspect addons-005301 --format={{.State.Status}}
	I0116 03:21:34.463830  725437 addons.go:69] Setting metrics-server=true in profile "addons-005301"
	I0116 03:21:34.463852  725437 addons.go:234] Setting addon metrics-server=true in "addons-005301"
	I0116 03:21:34.463883  725437 host.go:66] Checking if "addons-005301" exists ...
	I0116 03:21:34.464331  725437 cli_runner.go:164] Run: docker container inspect addons-005301 --format={{.State.Status}}
	I0116 03:21:34.464632  725437 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-005301"
	I0116 03:21:34.464655  725437 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-005301"
	I0116 03:21:34.464689  725437 host.go:66] Checking if "addons-005301" exists ...
	I0116 03:21:34.465065  725437 cli_runner.go:164] Run: docker container inspect addons-005301 --format={{.State.Status}}
	I0116 03:21:34.467556  725437 addons.go:69] Setting registry=true in profile "addons-005301"
	I0116 03:21:34.467576  725437 addons.go:234] Setting addon registry=true in "addons-005301"
	I0116 03:21:34.467619  725437 host.go:66] Checking if "addons-005301" exists ...
	I0116 03:21:34.468030  725437 cli_runner.go:164] Run: docker container inspect addons-005301 --format={{.State.Status}}
	I0116 03:21:34.476363  725437 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-005301"
	I0116 03:21:34.476539  725437 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-005301"
	I0116 03:21:34.476656  725437 host.go:66] Checking if "addons-005301" exists ...
	I0116 03:21:34.478471  725437 addons.go:69] Setting storage-provisioner=true in profile "addons-005301"
	I0116 03:21:34.504710  725437 addons.go:234] Setting addon storage-provisioner=true in "addons-005301"
	I0116 03:21:34.478487  725437 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-005301"
	I0116 03:21:34.478505  725437 addons.go:69] Setting volumesnapshots=true in profile "addons-005301"
	I0116 03:21:34.478586  725437 addons.go:69] Setting default-storageclass=true in profile "addons-005301"
	I0116 03:21:34.478592  725437 addons.go:69] Setting gcp-auth=true in profile "addons-005301"
	I0116 03:21:34.478597  725437 addons.go:69] Setting ingress=true in profile "addons-005301"
	I0116 03:21:34.478605  725437 addons.go:69] Setting ingress-dns=true in profile "addons-005301"
	I0116 03:21:34.478612  725437 addons.go:69] Setting inspektor-gadget=true in profile "addons-005301"
	I0116 03:21:34.504955  725437 cli_runner.go:164] Run: docker container inspect addons-005301 --format={{.State.Status}}
	I0116 03:21:34.505669  725437 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-005301"
	I0116 03:21:34.505991  725437 cli_runner.go:164] Run: docker container inspect addons-005301 --format={{.State.Status}}
	I0116 03:21:34.517630  725437 host.go:66] Checking if "addons-005301" exists ...
	I0116 03:21:34.518208  725437 cli_runner.go:164] Run: docker container inspect addons-005301 --format={{.State.Status}}
	I0116 03:21:34.521199  725437 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-005301"
	I0116 03:21:34.521448  725437 addons.go:234] Setting addon volumesnapshots=true in "addons-005301"
	I0116 03:21:34.521530  725437 host.go:66] Checking if "addons-005301" exists ...
	I0116 03:21:34.523008  725437 mustload.go:65] Loading cluster: addons-005301
	I0116 03:21:34.523186  725437 config.go:182] Loaded profile config "addons-005301": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:21:34.523413  725437 cli_runner.go:164] Run: docker container inspect addons-005301 --format={{.State.Status}}
	I0116 03:21:34.532942  725437 cli_runner.go:164] Run: docker container inspect addons-005301 --format={{.State.Status}}
	I0116 03:21:34.540478  725437 addons.go:234] Setting addon ingress=true in "addons-005301"
	I0116 03:21:34.540583  725437 host.go:66] Checking if "addons-005301" exists ...
	I0116 03:21:34.541178  725437 addons.go:234] Setting addon ingress-dns=true in "addons-005301"
	I0116 03:21:34.541258  725437 host.go:66] Checking if "addons-005301" exists ...
	I0116 03:21:34.541717  725437 cli_runner.go:164] Run: docker container inspect addons-005301 --format={{.State.Status}}
	I0116 03:21:34.566233  725437 addons.go:234] Setting addon inspektor-gadget=true in "addons-005301"
	I0116 03:21:34.566346  725437 host.go:66] Checking if "addons-005301" exists ...
	I0116 03:21:34.566877  725437 cli_runner.go:164] Run: docker container inspect addons-005301 --format={{.State.Status}}
	I0116 03:21:34.568696  725437 cli_runner.go:164] Run: docker container inspect addons-005301 --format={{.State.Status}}
	I0116 03:21:34.588477  725437 cli_runner.go:164] Run: docker container inspect addons-005301 --format={{.State.Status}}
	I0116 03:21:34.619838  725437 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.13
	I0116 03:21:34.622471  725437 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0116 03:21:34.622490  725437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0116 03:21:34.622552  725437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-005301
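Two patterns repeat throughout this phase: each addon manifest is streamed from minikube's embedded assets over SSH (the "scp memory -->" lines), and the SSH endpoint itself is resolved by asking Docker which host port it published for the container's 22/tcp (the Go template in the inspect call above). A quick way to confirm that mapping by hand, assuming the docker CLI on the same host:

    docker port addons-005301 22/tcp
    # prints something shaped like 0.0.0.0:33482 — the Port carried by the
    # sshutil clients further down in this log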
	I0116 03:21:34.694650  725437 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0116 03:21:34.698824  725437 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0116 03:21:34.698892  725437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0116 03:21:34.698985  725437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-005301
	I0116 03:21:34.710726  725437 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0116 03:21:34.713943  725437 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 03:21:34.713999  725437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 03:21:34.714089  725437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-005301
	I0116 03:21:34.716426  725437 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0116 03:21:34.718577  725437 out.go:177]   - Using image docker.io/registry:2.8.3
	I0116 03:21:34.726197  725437 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0116 03:21:34.726246  725437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0116 03:21:34.726332  725437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-005301
	I0116 03:21:34.759401  725437 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I0116 03:21:34.812793  725437 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0116 03:21:34.812857  725437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0116 03:21:34.812962  725437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-005301
	I0116 03:21:34.819979  725437 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0116 03:21:34.823531  725437 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0116 03:21:34.823552  725437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0116 03:21:34.823611  725437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-005301
	I0116 03:21:34.816698  725437 addons.go:234] Setting addon default-storageclass=true in "addons-005301"
	I0116 03:21:34.834251  725437 host.go:66] Checking if "addons-005301" exists ...
	I0116 03:21:34.834872  725437 cli_runner.go:164] Run: docker container inspect addons-005301 --format={{.State.Status}}
	I0116 03:21:34.850864  725437 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0116 03:21:34.853160  725437 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0116 03:21:34.859491  725437 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0116 03:21:34.864707  725437 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0116 03:21:34.867384  725437 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0116 03:21:34.871393  725437 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0116 03:21:34.873939  725437 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0116 03:21:34.859747  725437 host.go:66] Checking if "addons-005301" exists ...
	I0116 03:21:34.819927  725437 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-005301"
	I0116 03:21:34.889721  725437 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0116 03:21:34.889753  725437 host.go:66] Checking if "addons-005301" exists ...
	I0116 03:21:34.896167  725437 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:21:34.902322  725437 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:21:34.902343  725437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 03:21:34.902404  725437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-005301
	I0116 03:21:34.896732  725437 cli_runner.go:164] Run: docker container inspect addons-005301 --format={{.State.Status}}
	I0116 03:21:34.896819  725437 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0116 03:21:34.920453  725437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33482 SSHKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/machines/addons-005301/id_rsa Username:docker}
	I0116 03:21:34.922416  725437 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.5
	I0116 03:21:34.922528  725437 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0116 03:21:34.925658  725437 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I0116 03:21:34.925837  725437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0116 03:21:34.927768  725437 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0116 03:21:34.931485  725437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33482 SSHKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/machines/addons-005301/id_rsa Username:docker}
	I0116 03:21:34.938097  725437 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0116 03:21:34.938117  725437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0116 03:21:34.938136  725437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-005301
	I0116 03:21:34.938170  725437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-005301
	I0116 03:21:34.950578  725437 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0116 03:21:34.946833  725437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0116 03:21:34.957566  725437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-005301
	I0116 03:21:34.985398  725437 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0116 03:21:34.988017  725437 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0116 03:21:34.988043  725437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0116 03:21:34.988128  725437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-005301
	I0116 03:21:34.998212  725437 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 03:21:34.998243  725437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 03:21:34.998303  725437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-005301
	I0116 03:21:35.035230  725437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33482 SSHKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/machines/addons-005301/id_rsa Username:docker}
	I0116 03:21:35.061934  725437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
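The bash pipeline above rewrites CoreDNS's Corefile in place: sed inserts a hosts block ahead of the "forward . /etc/resolv.conf" line and a "log" directive ahead of "errors", then the edited ConfigMap is pushed back with kubectl replace. After the replace, the relevant Corefile fragment should read roughly as follows (reconstructed from the sed expressions, not copied from the cluster):

    log
    errors
    ...
    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf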
	I0116 03:21:35.086412  725437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33482 SSHKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/machines/addons-005301/id_rsa Username:docker}
	I0116 03:21:35.096210  725437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33482 SSHKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/machines/addons-005301/id_rsa Username:docker}
	I0116 03:21:35.116584  725437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33482 SSHKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/machines/addons-005301/id_rsa Username:docker}
	I0116 03:21:35.132513  725437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33482 SSHKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/machines/addons-005301/id_rsa Username:docker}
	I0116 03:21:35.137785  725437 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0116 03:21:35.140300  725437 out.go:177]   - Using image docker.io/busybox:stable
	I0116 03:21:35.142781  725437 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0116 03:21:35.142798  725437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0116 03:21:35.142856  725437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-005301
	I0116 03:21:35.164975  725437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33482 SSHKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/machines/addons-005301/id_rsa Username:docker}
	I0116 03:21:35.205896  725437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33482 SSHKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/machines/addons-005301/id_rsa Username:docker}
	I0116 03:21:35.225082  725437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33482 SSHKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/machines/addons-005301/id_rsa Username:docker}
	I0116 03:21:35.230957  725437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33482 SSHKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/machines/addons-005301/id_rsa Username:docker}
	I0116 03:21:35.240816  725437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33482 SSHKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/machines/addons-005301/id_rsa Username:docker}
	I0116 03:21:35.244376  725437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33482 SSHKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/machines/addons-005301/id_rsa Username:docker}
	I0116 03:21:35.346282  725437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0116 03:21:35.360760  725437 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-005301" context rescaled to 1 replicas
	I0116 03:21:35.360798  725437 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 03:21:35.363138  725437 out.go:177] * Verifying Kubernetes components...
	I0116 03:21:35.365751  725437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
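systemctl is-active --quiet exits 0 only when the unit is active, so this call doubles as a kubelet health probe; its completion is logged about four seconds later. The same check for just the kubelet unit, runnable by hand inside the node:

    sudo systemctl is-active --quiet kubelet && echo "kubelet is active"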
	I0116 03:21:35.400894  725437 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0116 03:21:35.400964  725437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0116 03:21:35.530996  725437 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0116 03:21:35.531081  725437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0116 03:21:35.569618  725437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:21:35.602558  725437 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0116 03:21:35.602626  725437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0116 03:21:35.633990  725437 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0116 03:21:35.634061  725437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0116 03:21:35.718723  725437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0116 03:21:35.730852  725437 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0116 03:21:35.730928  725437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0116 03:21:35.742647  725437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0116 03:21:35.749693  725437 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 03:21:35.749755  725437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0116 03:21:35.769238  725437 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0116 03:21:35.769305  725437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0116 03:21:35.779717  725437 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0116 03:21:35.779796  725437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0116 03:21:35.787116  725437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0116 03:21:35.799292  725437 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0116 03:21:35.799350  725437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0116 03:21:35.814040  725437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0116 03:21:35.842652  725437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 03:21:35.854319  725437 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0116 03:21:35.854387  725437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0116 03:21:35.914447  725437 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0116 03:21:35.914513  725437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0116 03:21:35.919140  725437 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0116 03:21:35.919207  725437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0116 03:21:35.954437  725437 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 03:21:35.954457  725437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 03:21:35.957452  725437 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0116 03:21:35.957470  725437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0116 03:21:35.974645  725437 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0116 03:21:35.974666  725437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0116 03:21:36.057506  725437 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0116 03:21:36.057583  725437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0116 03:21:36.083218  725437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0116 03:21:36.108434  725437 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0116 03:21:36.108507  725437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0116 03:21:36.129641  725437 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0116 03:21:36.129710  725437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0116 03:21:36.134951  725437 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0116 03:21:36.135012  725437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0116 03:21:36.190941  725437 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:21:36.191011  725437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 03:21:36.249453  725437 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0116 03:21:36.249524  725437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0116 03:21:36.285077  725437 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0116 03:21:36.285147  725437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0116 03:21:36.322498  725437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0116 03:21:36.325112  725437 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0116 03:21:36.325175  725437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0116 03:21:36.392613  725437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 03:21:36.417930  725437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0116 03:21:36.445283  725437 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0116 03:21:36.445354  725437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0116 03:21:36.499962  725437 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0116 03:21:36.500032  725437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0116 03:21:36.565340  725437 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0116 03:21:36.565410  725437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0116 03:21:36.609710  725437 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0116 03:21:36.609779  725437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0116 03:21:36.691314  725437 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0116 03:21:36.691387  725437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0116 03:21:36.700344  725437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0116 03:21:36.729667  725437 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0116 03:21:36.729733  725437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0116 03:21:36.886292  725437 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0116 03:21:36.886360  725437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0116 03:21:36.994471  725437 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0116 03:21:36.994540  725437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0116 03:21:37.051548  725437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0116 03:21:38.070076  725437 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.008107837s)
	I0116 03:21:38.070152  725437 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0116 03:21:39.480088  725437 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.133721791s)
	I0116 03:21:39.480208  725437 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (4.114439857s)
	I0116 03:21:39.481187  725437 node_ready.go:35] waiting up to 6m0s for node "addons-005301" to be "Ready" ...
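node_ready polls the node object's Ready condition; the "Ready":"False" lines that recur below are those polls repeating until kubelet reports ready. A one-shot equivalent, assuming the same kubeconfig context:

    kubectl --context addons-005301 get node addons-005301 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # prints False until the node is Ready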
	I0116 03:21:40.241496  725437 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.671794163s)
	I0116 03:21:40.896493  725437 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.177690039s)
	I0116 03:21:40.896574  725437 addons.go:470] Verifying addon ingress=true in "addons-005301"
	I0116 03:21:40.898524  725437 out.go:177] * Verifying ingress addon...
	I0116 03:21:40.896750  725437 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.154028303s)
	I0116 03:21:40.896780  725437 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.109612013s)
	I0116 03:21:40.896917  725437 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.082812324s)
	I0116 03:21:40.896941  725437 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.054223336s)
	I0116 03:21:40.896989  725437 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.813693315s)
	I0116 03:21:40.898856  725437 addons.go:470] Verifying addon registry=true in "addons-005301"
	I0116 03:21:40.897033  725437 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.574452354s)
	I0116 03:21:40.902501  725437 out.go:177] * Verifying registry addon...
	I0116 03:21:40.897183  725437 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.479180986s)
	I0116 03:21:40.897234  725437 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.196828082s)
	I0116 03:21:40.897110  725437 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.504433455s)
	I0116 03:21:40.904466  725437 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-005301 service yakd-dashboard -n yakd-dashboard
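(minikube service resolves the NodePort and opens or prints the dashboard URL; adding --url skips the browser and only prints it, e.g.:

    minikube -p addons-005301 service yakd-dashboard -n yakd-dashboard --url)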
	
	W0116 03:21:40.904522  725437 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
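The failure is a CRD-establishment race: the batch above creates the snapshot CRDs and a VolumeSnapshotClass instance in the same kubectl apply, and the instance reaches the API server before the new CRD is served, hence "no matches for kind". minikube simply retries (see below); done by hand under the same assumption, the fix is to let the CRDs establish between two separate applies:

    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for condition=established \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml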
	I0116 03:21:40.907291  725437 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0116 03:21:40.907312  725437 addons.go:470] Verifying addon metrics-server=true in "addons-005301"
	I0116 03:21:40.909992  725437 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0116 03:21:40.910014  725437 retry.go:31] will retry after 325.207915ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0116 03:21:40.929282  725437 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0116 03:21:40.929342  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:21:40.929524  725437 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0116 03:21:40.929540  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0116 03:21:40.933446  725437 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
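This is an optimistic-concurrency conflict: two writers raced on the local-path StorageClass and the loser's update carried a stale resourceVersion. A server-side patch sidesteps the read-modify-write cycle, so a manual retry could look like this (illustrative, using the standard default-class annotation):

    kubectl patch storageclass local-path -p \
      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'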
	I0116 03:21:41.144160  725437 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.092505254s)
	I0116 03:21:41.144198  725437 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-005301"
	I0116 03:21:41.148168  725437 out.go:177] * Verifying csi-hostpath-driver addon...
	I0116 03:21:41.151019  725437 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0116 03:21:41.165202  725437 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0116 03:21:41.165265  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
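kapi.go's wait loop re-lists the pods matching each label selector and logs their state on every pass, which produces the long "Pending: [<nil>]" runs that follow. A one-line stand-in for the same wait, assuming kubectl access to the cluster:

    kubectl -n kube-system wait --for=condition=Ready pod \
      -l kubernetes.io/minikube-addons=csi-hostpath-driver --timeout=6m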
	I0116 03:21:41.237529  725437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0116 03:21:41.421632  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:21:41.422667  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:21:41.502515  725437 node_ready.go:58] node "addons-005301" has status "Ready":"False"
	I0116 03:21:41.677396  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:21:41.941342  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:21:41.942590  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:21:42.185207  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:21:42.417421  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:21:42.419731  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:21:42.693907  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:21:42.913610  725437 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.6760368s)
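The forced re-apply now finishes in about 1.7s because the snapshot CRDs created by the first attempt have since been established; that precondition can be spot-checked with (a sketch):

    kubectl get crd volumesnapshotclasses.snapshot.storage.k8s.io \
      -o jsonpath='{.status.conditions[?(@.type=="Established")].status}'
    # expect: True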
	I0116 03:21:42.926902  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:21:42.928180  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:21:43.162024  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:21:43.361223  725437 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0116 03:21:43.361350  725437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-005301
	I0116 03:21:43.398328  725437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33482 SSHKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/machines/addons-005301/id_rsa Username:docker}
	I0116 03:21:43.417749  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:21:43.419131  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:21:43.600326  725437 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0116 03:21:43.658105  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:21:43.674347  725437 addons.go:234] Setting addon gcp-auth=true in "addons-005301"
	I0116 03:21:43.674441  725437 host.go:66] Checking if "addons-005301" exists ...
	I0116 03:21:43.674992  725437 cli_runner.go:164] Run: docker container inspect addons-005301 --format={{.State.Status}}
	I0116 03:21:43.700674  725437 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0116 03:21:43.700724  725437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-005301
	I0116 03:21:43.729779  725437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33482 SSHKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/machines/addons-005301/id_rsa Username:docker}
	I0116 03:21:43.905562  725437 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0116 03:21:43.907680  725437 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0116 03:21:43.909523  725437 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0116 03:21:43.909542  725437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0116 03:21:43.925892  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:21:43.927319  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:21:43.966466  725437 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0116 03:21:43.966488  725437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0116 03:21:43.984938  725437 node_ready.go:58] node "addons-005301" has status "Ready":"False"
	I0116 03:21:43.993624  725437 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0116 03:21:43.993681  725437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I0116 03:21:44.023344  725437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0116 03:21:44.156096  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:21:44.418479  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:21:44.428575  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:21:44.664393  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:21:44.934166  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:21:44.936631  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:21:45.022324  725437 addons.go:470] Verifying addon gcp-auth=true in "addons-005301"
	I0116 03:21:45.024756  725437 out.go:177] * Verifying gcp-auth addon...
	I0116 03:21:45.027941  725437 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0116 03:21:45.049634  725437 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0116 03:21:45.049709  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:21:45.157134  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:21:45.416914  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:21:45.418334  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:21:45.532572  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:21:45.656018  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:21:45.924387  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:21:45.926217  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:21:45.985138  725437 node_ready.go:58] node "addons-005301" has status "Ready":"False"
	I0116 03:21:46.032095  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:21:46.156522  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:21:46.419218  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:21:46.420024  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:21:46.532169  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:21:46.658407  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:21:46.916147  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:21:46.916637  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:21:47.033505  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:21:47.155507  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:21:47.415627  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:21:47.416522  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:21:47.531962  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:21:47.655446  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:21:47.914343  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:21:47.915780  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:21:47.985262  725437 node_ready.go:58] node "addons-005301" has status "Ready":"False"
	I0116 03:21:48.033678  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:21:48.155353  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:21:48.416768  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:21:48.418152  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:21:48.532657  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:21:48.656806  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:21:48.914642  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:21:48.915912  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:21:49.033690  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:21:49.155767  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:21:49.414627  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:21:49.416109  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:21:49.531748  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:21:49.658440  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:21:49.914520  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:21:49.915896  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:21:50.031844  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:21:50.156047  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:21:50.415569  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:21:50.415707  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:21:50.484909  725437 node_ready.go:58] node "addons-005301" has status "Ready":"False"
	I0116 03:21:50.531886  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:21:50.656798  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:21:50.914669  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:21:50.916057  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:21:51.031568  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:21:51.155747  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:21:51.413955  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:21:51.415930  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:21:51.532251  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:21:51.655765  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:21:51.914914  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:21:51.916630  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:21:52.032090  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:21:52.157596  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:21:52.414545  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:21:52.416937  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:21:52.484992  725437 node_ready.go:58] node "addons-005301" has status "Ready":"False"
	I0116 03:21:52.531827  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:21:52.655630  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:21:52.915684  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:21:52.916115  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:21:53.032269  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:21:53.155651  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:21:53.415436  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:21:53.415747  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:21:53.531962  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:21:53.655799  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:21:53.914946  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:21:53.917681  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:21:54.032281  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:21:54.155298  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:21:54.414642  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:21:54.416204  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:21:54.531354  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:21:54.655643  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:21:54.916689  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:21:54.916900  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:21:54.985388  725437 node_ready.go:58] node "addons-005301" has status "Ready":"False"
	I0116 03:21:55.032275  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:21:55.156376  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:21:55.415012  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:21:55.415514  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:21:55.531346  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:21:55.655061  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:21:55.915367  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:21:55.916666  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:21:56.031666  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:21:56.155851  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:21:56.415736  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:21:56.416408  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:21:56.531896  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:21:56.654998  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:21:56.915387  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:21:56.915964  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:21:57.031959  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:21:57.155588  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:21:57.414222  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:21:57.415587  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:21:57.484525  725437 node_ready.go:58] node "addons-005301" has status "Ready":"False"
	I0116 03:21:57.531866  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:21:57.658157  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:21:57.914554  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:21:57.917664  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:21:58.031969  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:21:58.155032  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:21:58.414338  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:21:58.416233  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:21:58.531772  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:21:58.655629  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:21:58.914934  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:21:58.915729  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:21:59.031889  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:21:59.155578  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:21:59.414469  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:21:59.415822  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:21:59.485015  725437 node_ready.go:58] node "addons-005301" has status "Ready":"False"
	I0116 03:21:59.531271  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:21:59.657051  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:21:59.914588  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:21:59.917131  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:00.032507  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:00.155978  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:00.415529  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:00.415994  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:00.532091  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:00.656053  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:00.915447  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:00.916239  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:01.031914  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:01.155007  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:01.414351  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:01.415495  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:01.532297  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:01.655190  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:01.915134  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:01.915866  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:01.985091  725437 node_ready.go:58] node "addons-005301" has status "Ready":"False"
	I0116 03:22:02.031614  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:02.155701  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:02.415026  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:02.416420  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:02.532291  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:02.656864  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:02.914483  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:02.915909  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:03.032050  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:03.155677  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:03.414704  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:03.418522  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:03.532686  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:03.660793  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:03.915065  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:03.916098  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:04.032023  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:04.169779  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:04.415099  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:04.415668  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:04.484573  725437 node_ready.go:58] node "addons-005301" has status "Ready":"False"
	I0116 03:22:04.531420  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:04.655188  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:04.914319  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:04.916938  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:05.031583  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:05.155712  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:05.414257  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:05.416447  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:05.531782  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:05.655654  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:05.914308  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:05.915915  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:06.007132  725437 node_ready.go:49] node "addons-005301" has status "Ready":"True"
	I0116 03:22:06.007211  725437 node_ready.go:38] duration metric: took 26.52596314s waiting for node "addons-005301" to be "Ready" ...
	I0116 03:22:06.007238  725437 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods, including those with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
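
The pod_ready waits above poll the PodReady condition through the apiserver. A minimal client-go sketch of that style of check, assuming a default kubeconfig and a 500ms poll interval (illustrative only, not minikube's actual pod_ready.go; the label selector is taken from the log line above):

	// pod_ready_sketch.go - illustrative sketch of a PodReady condition poll.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isReady reports whether the pod's PodReady condition is True.
	func isReady(pod corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumption: default kubeconfig location; minikube resolves this per profile.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		for {
			// Label selector from the coredns wait logged above.
			pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
			if err == nil && len(pods.Items) > 0 && isReady(pods.Items[0]) {
				fmt.Printf("pod %q is Ready\n", pods.Items[0].Name)
				return
			}
			time.Sleep(500 * time.Millisecond) // assumed poll interval
		}
	}
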
	I0116 03:22:06.046342  725437 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-j6nhv" in "kube-system" namespace to be "Ready" ...
	I0116 03:22:06.049891  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:06.162716  725437 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0116 03:22:06.162783  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:06.418881  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:06.424945  725437 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0116 03:22:06.425009  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:06.560740  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:06.674779  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:06.917985  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:06.921212  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:07.033667  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:07.061942  725437 pod_ready.go:92] pod "coredns-5dd5756b68-j6nhv" in "kube-system" namespace has status "Ready":"True"
	I0116 03:22:07.061968  725437 pod_ready.go:81] duration metric: took 1.015553298s waiting for pod "coredns-5dd5756b68-j6nhv" in "kube-system" namespace to be "Ready" ...
	I0116 03:22:07.061986  725437 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-005301" in "kube-system" namespace to be "Ready" ...
	I0116 03:22:07.073908  725437 pod_ready.go:92] pod "etcd-addons-005301" in "kube-system" namespace has status "Ready":"True"
	I0116 03:22:07.073932  725437 pod_ready.go:81] duration metric: took 11.938836ms waiting for pod "etcd-addons-005301" in "kube-system" namespace to be "Ready" ...
	I0116 03:22:07.073946  725437 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-005301" in "kube-system" namespace to be "Ready" ...
	I0116 03:22:07.086430  725437 pod_ready.go:92] pod "kube-apiserver-addons-005301" in "kube-system" namespace has status "Ready":"True"
	I0116 03:22:07.086454  725437 pod_ready.go:81] duration metric: took 12.49994ms waiting for pod "kube-apiserver-addons-005301" in "kube-system" namespace to be "Ready" ...
	I0116 03:22:07.086470  725437 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-005301" in "kube-system" namespace to be "Ready" ...
	I0116 03:22:07.091748  725437 pod_ready.go:92] pod "kube-controller-manager-addons-005301" in "kube-system" namespace has status "Ready":"True"
	I0116 03:22:07.091769  725437 pod_ready.go:81] duration metric: took 5.291628ms waiting for pod "kube-controller-manager-addons-005301" in "kube-system" namespace to be "Ready" ...
	I0116 03:22:07.091781  725437 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-824gf" in "kube-system" namespace to be "Ready" ...
	I0116 03:22:07.156925  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:07.185191  725437 pod_ready.go:92] pod "kube-proxy-824gf" in "kube-system" namespace has status "Ready":"True"
	I0116 03:22:07.185219  725437 pod_ready.go:81] duration metric: took 93.430252ms waiting for pod "kube-proxy-824gf" in "kube-system" namespace to be "Ready" ...
	I0116 03:22:07.185236  725437 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-005301" in "kube-system" namespace to be "Ready" ...
	I0116 03:22:07.420399  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:07.421794  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:07.531531  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:07.585112  725437 pod_ready.go:92] pod "kube-scheduler-addons-005301" in "kube-system" namespace has status "Ready":"True"
	I0116 03:22:07.585138  725437 pod_ready.go:81] duration metric: took 399.893498ms waiting for pod "kube-scheduler-addons-005301" in "kube-system" namespace to be "Ready" ...
	I0116 03:22:07.585150  725437 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-vmqb5" in "kube-system" namespace to be "Ready" ...
	I0116 03:22:07.657892  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:07.916562  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:07.918663  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:08.031553  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:08.157621  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:08.416925  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:08.417842  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:08.532383  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:08.657935  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:08.920555  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:08.921686  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:09.032187  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:09.157603  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:09.417597  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:09.418767  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:09.532758  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:09.599271  725437 pod_ready.go:102] pod "metrics-server-7c66d45ddc-vmqb5" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:09.658259  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:09.914843  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:09.917808  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:10.046960  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:10.157680  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:10.429534  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:10.433546  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:10.543180  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:10.659488  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:10.920700  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:10.923773  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:11.035096  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:11.159307  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:11.420506  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:11.424006  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:11.532520  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:11.602191  725437 pod_ready.go:102] pod "metrics-server-7c66d45ddc-vmqb5" in "kube-system" namespace has status "Ready":"False"
	I0116 03:22:11.661030  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:11.932196  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:11.933361  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:12.032577  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:12.111451  725437 pod_ready.go:92] pod "metrics-server-7c66d45ddc-vmqb5" in "kube-system" namespace has status "Ready":"True"
	I0116 03:22:12.111488  725437 pod_ready.go:81] duration metric: took 4.526329254s waiting for pod "metrics-server-7c66d45ddc-vmqb5" in "kube-system" namespace to be "Ready" ...
	I0116 03:22:12.111507  725437 pod_ready.go:38] duration metric: took 6.104245196s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:22:12.111527  725437 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:22:12.111587  725437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:22:12.152579  725437 api_server.go:72] duration metric: took 36.791751379s to wait for apiserver process to appear ...
	I0116 03:22:12.152597  725437 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:22:12.152615  725437 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0116 03:22:12.169447  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:12.206673  725437 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0116 03:22:12.208051  725437 api_server.go:141] control plane version: v1.28.4
	I0116 03:22:12.208129  725437 api_server.go:131] duration metric: took 55.523282ms to wait for apiserver health ...
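
The healthz probe above is a plain HTTPS GET against the apiserver endpoint taken from the log. A Go sketch of the same check; skipping TLS verification is an assumption for brevity here (minikube itself authenticates with the cluster's client certificates):

	// healthz_check.go - sketch of the apiserver healthz probe logged above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Assumption: skip cert verification for illustration only.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			fmt.Println("healthz check failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// A healthy apiserver answers 200 with body "ok", as in the log.
		fmt.Printf("returned %d: %s\n", resp.StatusCode, body)
	}
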
	I0116 03:22:12.208154  725437 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:22:12.234230  725437 system_pods.go:59] 18 kube-system pods found
	I0116 03:22:12.234307  725437 system_pods.go:61] "coredns-5dd5756b68-j6nhv" [ac2061d3-ea8c-4e25-871f-7c79facc2873] Running
	I0116 03:22:12.234332  725437 system_pods.go:61] "csi-hostpath-attacher-0" [df15a54f-5af2-42a6-86de-53f82ed4f160] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0116 03:22:12.234353  725437 system_pods.go:61] "csi-hostpath-resizer-0" [e028f453-1e2f-4c77-8216-3d0905b95ba2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0116 03:22:12.234389  725437 system_pods.go:61] "csi-hostpathplugin-cf47r" [a1a59419-32de-4383-aae4-dcfdea85420c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0116 03:22:12.234413  725437 system_pods.go:61] "etcd-addons-005301" [f44e2791-2aa8-4e35-b284-bd0a0f39b772] Running
	I0116 03:22:12.234431  725437 system_pods.go:61] "kindnet-xgz86" [20d665a0-b27a-473c-9519-9e1d67744ba3] Running
	I0116 03:22:12.234446  725437 system_pods.go:61] "kube-apiserver-addons-005301" [c6fa7b73-cda3-43e0-bca1-e98bece23e74] Running
	I0116 03:22:12.234474  725437 system_pods.go:61] "kube-controller-manager-addons-005301" [06cc5b06-ffe7-4cea-a024-43df8d3a4d16] Running
	I0116 03:22:12.234498  725437 system_pods.go:61] "kube-ingress-dns-minikube" [b009930c-7cb7-4e4d-b211-15b10b7426d5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0116 03:22:12.234514  725437 system_pods.go:61] "kube-proxy-824gf" [7b8f6b0f-2f30-4823-a03c-01c7da4f4b5c] Running
	I0116 03:22:12.234529  725437 system_pods.go:61] "kube-scheduler-addons-005301" [c6ec56b7-e166-4bb5-8236-4ca8df99ab87] Running
	I0116 03:22:12.234555  725437 system_pods.go:61] "metrics-server-7c66d45ddc-vmqb5" [9250ab40-a698-47db-824c-ce37ce1c5daf] Running
	I0116 03:22:12.234580  725437 system_pods.go:61] "nvidia-device-plugin-daemonset-8hr29" [5306d06e-94e9-4a23-a82d-32ab86b63e82] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0116 03:22:12.234599  725437 system_pods.go:61] "registry-jwfz4" [bc959312-0380-4457-89d3-7a3db8b1e928] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0116 03:22:12.234630  725437 system_pods.go:61] "registry-proxy-gwljk" [2edcc070-e7c4-41ce-92db-9bbb2abd69e5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0116 03:22:12.234655  725437 system_pods.go:61] "snapshot-controller-58dbcc7b99-27n96" [0ff9492e-1a0a-441d-a61b-3bd4b65d736e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0116 03:22:12.234676  725437 system_pods.go:61] "snapshot-controller-58dbcc7b99-m6fgp" [8846de13-8bcb-4d89-89d6-3ee0a4192b40] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0116 03:22:12.234706  725437 system_pods.go:61] "storage-provisioner" [c2e18697-0f61-4d54-9f2b-98f4dd15f5fc] Running
	I0116 03:22:12.234729  725437 system_pods.go:74] duration metric: took 26.557955ms to wait for pod list to return data ...
	I0116 03:22:12.234749  725437 default_sa.go:34] waiting for default service account to be created ...
	I0116 03:22:12.239301  725437 default_sa.go:45] found service account: "default"
	I0116 03:22:12.239322  725437 default_sa.go:55] duration metric: took 4.557255ms for default service account to be created ...
	I0116 03:22:12.239331  725437 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 03:22:12.258861  725437 system_pods.go:86] 18 kube-system pods found
	I0116 03:22:12.258935  725437 system_pods.go:89] "coredns-5dd5756b68-j6nhv" [ac2061d3-ea8c-4e25-871f-7c79facc2873] Running
	I0116 03:22:12.258960  725437 system_pods.go:89] "csi-hostpath-attacher-0" [df15a54f-5af2-42a6-86de-53f82ed4f160] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0116 03:22:12.258984  725437 system_pods.go:89] "csi-hostpath-resizer-0" [e028f453-1e2f-4c77-8216-3d0905b95ba2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0116 03:22:12.259019  725437 system_pods.go:89] "csi-hostpathplugin-cf47r" [a1a59419-32de-4383-aae4-dcfdea85420c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0116 03:22:12.259041  725437 system_pods.go:89] "etcd-addons-005301" [f44e2791-2aa8-4e35-b284-bd0a0f39b772] Running
	I0116 03:22:12.259058  725437 system_pods.go:89] "kindnet-xgz86" [20d665a0-b27a-473c-9519-9e1d67744ba3] Running
	I0116 03:22:12.259073  725437 system_pods.go:89] "kube-apiserver-addons-005301" [c6fa7b73-cda3-43e0-bca1-e98bece23e74] Running
	I0116 03:22:12.259102  725437 system_pods.go:89] "kube-controller-manager-addons-005301" [06cc5b06-ffe7-4cea-a024-43df8d3a4d16] Running
	I0116 03:22:12.259126  725437 system_pods.go:89] "kube-ingress-dns-minikube" [b009930c-7cb7-4e4d-b211-15b10b7426d5] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0116 03:22:12.259143  725437 system_pods.go:89] "kube-proxy-824gf" [7b8f6b0f-2f30-4823-a03c-01c7da4f4b5c] Running
	I0116 03:22:12.259175  725437 system_pods.go:89] "kube-scheduler-addons-005301" [c6ec56b7-e166-4bb5-8236-4ca8df99ab87] Running
	I0116 03:22:12.259198  725437 system_pods.go:89] "metrics-server-7c66d45ddc-vmqb5" [9250ab40-a698-47db-824c-ce37ce1c5daf] Running
	I0116 03:22:12.259219  725437 system_pods.go:89] "nvidia-device-plugin-daemonset-8hr29" [5306d06e-94e9-4a23-a82d-32ab86b63e82] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0116 03:22:12.259237  725437 system_pods.go:89] "registry-jwfz4" [bc959312-0380-4457-89d3-7a3db8b1e928] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0116 03:22:12.259272  725437 system_pods.go:89] "registry-proxy-gwljk" [2edcc070-e7c4-41ce-92db-9bbb2abd69e5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0116 03:22:12.259292  725437 system_pods.go:89] "snapshot-controller-58dbcc7b99-27n96" [0ff9492e-1a0a-441d-a61b-3bd4b65d736e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0116 03:22:12.259313  725437 system_pods.go:89] "snapshot-controller-58dbcc7b99-m6fgp" [8846de13-8bcb-4d89-89d6-3ee0a4192b40] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0116 03:22:12.259342  725437 system_pods.go:89] "storage-provisioner" [c2e18697-0f61-4d54-9f2b-98f4dd15f5fc] Running
	I0116 03:22:12.259364  725437 system_pods.go:126] duration metric: took 20.027639ms to wait for k8s-apps to be running ...
	I0116 03:22:12.259382  725437 system_svc.go:44] waiting for kubelet service to be running ...
	I0116 03:22:12.259458  725437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:22:12.285217  725437 system_svc.go:56] duration metric: took 25.816895ms for WaitForService to wait for kubelet.
	I0116 03:22:12.285290  725437 kubeadm.go:581] duration metric: took 36.924459113s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
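
The kubelet service check above runs systemctl inside the node over SSH (ssh_runner). A local sketch of the same invocation via os/exec, mirroring the logged command verbatim; running it meaningfully assumes a systemd host with a kubelet unit:

	// kubelet_active.go - sketch of the systemd liveness check logged above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Mirrors the logged command; minikube executes it via ssh_runner
		// inside the node rather than locally.
		cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
		if err := cmd.Run(); err != nil {
			// is-active --quiet exits non-zero when the unit is not active.
			fmt.Println("kubelet is not active:", err)
			return
		}
		fmt.Println("kubelet is active")
	}
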
	I0116 03:22:12.285323  725437 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:22:12.385639  725437 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0116 03:22:12.385670  725437 node_conditions.go:123] node cpu capacity is 2
	I0116 03:22:12.385683  725437 node_conditions.go:105] duration metric: took 100.344251ms to run NodePressure ...
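
The NodePressure step reads the node capacity fields reported just above (203034800Ki ephemeral storage, 2 CPUs). A client-go sketch of that read, using the ResourceList helper methods and assuming a default kubeconfig:

	// node_capacity.go - sketch of the capacity read behind the NodePressure
	// verification logged above.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: default kubeconfig location.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		node, err := client.CoreV1().Nodes().Get(context.TODO(), "addons-005301", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// StorageEphemeral() and Cpu() are ResourceList helpers from k8s.io/api/core/v1.
		fmt.Println("ephemeral-storage capacity:", node.Status.Capacity.StorageEphemeral().String())
		fmt.Println("cpu capacity:", node.Status.Capacity.Cpu().String())
	}
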
	I0116 03:22:12.385713  725437 start.go:228] waiting for startup goroutines ...
	I0116 03:22:12.415462  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:12.418485  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:12.531360  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:12.657702  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:12.926599  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:12.927028  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:13.033569  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:13.158208  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:13.415166  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:13.417488  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:13.534654  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:13.658515  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:13.915283  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:13.917270  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:14.031935  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:14.157650  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:14.414787  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:14.416826  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:14.531259  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:14.662549  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:14.930900  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:14.945540  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:15.034392  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:15.156933  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:15.414855  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:15.417378  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:15.531356  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:15.656676  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:15.918060  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:15.919399  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:16.032807  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:16.158353  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:16.417483  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:16.418803  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:16.533912  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:16.661129  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:16.915550  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:16.918657  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:17.032512  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:17.158870  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:17.425590  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:17.435008  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:17.533331  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:17.658467  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:17.918353  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:17.921470  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:18.032855  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:18.159003  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:18.414322  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:18.421302  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:18.532791  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:18.658134  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:18.915007  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:18.921138  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:19.032628  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:19.156538  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:19.417666  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:19.419318  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:19.533162  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:19.659099  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:19.918330  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:19.919881  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:20.033250  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:20.157382  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:20.414178  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:20.417163  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:20.532032  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:20.660704  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:20.921078  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:20.922211  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:21.032091  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:21.167831  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:21.418182  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:21.419277  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:21.532399  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:21.659273  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:21.914071  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:21.917525  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:22.031277  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:22.158030  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:22.416717  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:22.424557  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:22.532201  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:22.658026  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:22.915776  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:22.921515  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:23.032157  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:23.157342  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:23.416752  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:23.417327  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:23.532927  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:23.657544  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:23.917857  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:23.918841  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:24.031794  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:24.157127  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:24.423318  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:24.424137  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:24.531567  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:24.671804  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:24.921300  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:24.923137  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:25.031952  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:25.159075  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:25.416738  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:25.417463  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:25.532492  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:25.657839  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:25.914802  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:25.917006  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:26.032029  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:26.156631  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:26.414678  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:26.417163  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:26.532172  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:26.665891  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:26.919317  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:26.922318  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:27.032144  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:27.157230  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:27.414552  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:27.418009  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:27.536457  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:27.658284  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:27.915347  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:27.918400  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:28.032112  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:28.156825  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:28.414697  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:28.417294  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:28.532191  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:28.657877  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:28.916743  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:28.917389  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:29.031683  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:29.157371  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:29.418047  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:29.419112  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:29.531836  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:29.657802  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:29.917154  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:29.921098  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:30.033213  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:30.157907  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:30.417926  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:30.419863  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:30.532455  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:30.658720  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:30.922615  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:30.925808  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:31.032189  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:31.157109  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:31.420203  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:31.420803  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:31.533256  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:31.657908  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:31.915489  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:31.918145  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:32.033462  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:32.156816  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:32.415893  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:32.417708  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:32.531964  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:32.656410  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:32.917406  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:32.918538  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:33.042083  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:33.173060  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:33.416383  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:33.419664  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:33.532277  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:33.657633  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:33.915073  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:33.917490  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:34.032084  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:34.156463  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:34.416579  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:34.417395  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:34.531541  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:34.668280  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:34.916792  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:34.917768  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:35.032294  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:35.156836  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:35.414319  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:35.418065  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:35.531661  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:35.657986  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:35.916437  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:35.918027  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:36.032131  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:36.157972  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:36.428971  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:36.430154  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:36.533198  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:36.658107  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:36.920192  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:36.926698  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:37.032616  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:37.158388  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:37.419101  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:37.419925  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:37.531799  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:37.659912  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:37.924880  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:37.924740  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:38.031661  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:38.157403  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:38.417261  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:38.418951  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:38.531315  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:38.657721  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:38.919126  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:38.923307  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:39.032104  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:39.156539  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:39.414998  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:39.418997  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:39.533410  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:39.656548  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:39.916297  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:39.916516  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:40.032049  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:40.156860  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:40.417945  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:40.419405  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:40.538290  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:40.660822  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:40.934878  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:40.935745  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:41.032437  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:41.157882  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:41.416074  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:41.418643  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:41.532251  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:41.660389  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:41.921115  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:41.922258  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:42.032109  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:42.163746  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:42.417695  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:42.424574  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:42.532730  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:42.658339  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:42.937846  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:42.945736  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:43.032839  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:43.157171  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:43.414786  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:43.422895  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:43.534302  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:43.657775  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:43.914324  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:43.916813  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 03:22:44.032519  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:44.157017  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:44.416156  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:44.416803  725437 kapi.go:107] duration metric: took 1m3.506807982s to wait for kubernetes.io/minikube-addons=registry ...
	I0116 03:22:44.532215  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:44.656569  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:44.915014  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:45.042251  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:45.162173  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:45.415081  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:45.533768  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:45.673957  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:45.916464  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:46.033566  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:46.158332  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:46.426522  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:46.535098  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:46.657729  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:46.915932  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:47.032299  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:47.157879  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:47.415632  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:47.532659  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:47.658086  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:47.915165  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:48.031826  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:48.158061  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:48.415241  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:48.532210  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:48.657601  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:48.915126  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:49.041230  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:49.156955  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:49.414375  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:49.531667  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:49.657911  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:49.917177  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:50.033028  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:50.157820  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:50.416376  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:50.532635  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:50.656487  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:50.915126  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:51.031515  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:51.157237  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:51.415196  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:51.532304  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:51.661477  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:51.914684  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:52.032334  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:52.157828  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:52.415093  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:52.531913  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 03:22:52.682381  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:52.924548  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:53.037540  725437 kapi.go:107] duration metric: took 1m8.009599451s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0116 03:22:53.039578  725437 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-005301 cluster.
	I0116 03:22:53.041615  725437 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0116 03:22:53.043601  725437 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
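	[Editor's note] The two gcp-auth hints above describe an opt-out label. As a minimal sketch (not part of this test run: the pod name, image choice, and the label value "true" are assumptions following minikube's documented convention), a Go program that builds such a manifest:
	
	package main
	
	import (
		"encoding/json"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)
	
	func main() {
		pod := corev1.Pod{
			TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
			ObjectMeta: metav1.ObjectMeta{
				Name:      "no-gcp-creds", // hypothetical pod name
				Namespace: "default",
				// Assumption: the gcp-auth webhook skips pods carrying this
				// label; the key is quoted from the log line above.
				Labels: map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{
					{Name: "app", Image: "gcr.io/google-samples/hello-app:1.0"},
				},
			},
		}
		out, _ := json.MarshalIndent(pod, "", "  ")
		fmt.Println(string(out)) // a manifest that could be applied with kubectl
	}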
	I0116 03:22:53.159303  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:53.421191  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:53.656811  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:53.914330  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:54.156715  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:54.417232  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:54.657483  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:54.916238  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:55.163327  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:55.415231  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:55.656707  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:55.922000  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:56.156699  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:56.415573  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:56.657305  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:56.918924  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:57.158877  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:57.415156  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:57.656805  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:57.916090  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:58.159919  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:58.415343  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:58.660717  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:58.914208  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:59.157015  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:59.414369  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:22:59.657047  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:22:59.916045  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:23:00.174119  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:23:00.416280  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:23:00.659007  725437 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 03:23:00.914544  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:23:01.157273  725437 kapi.go:107] duration metric: took 1m20.006251582s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0116 03:23:01.414838  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:23:01.914910  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:23:02.414532  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:23:02.914897  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:23:03.414799  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:23:03.915112  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:23:04.415030  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:23:04.915056  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:23:05.415013  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:23:05.914383  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:23:06.415024  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:23:06.914383  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:23:07.414988  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:23:07.914312  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:23:08.415242  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:23:08.914960  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:23:09.414737  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:23:09.914447  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:23:10.415317  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:23:10.915208  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:23:11.417478  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:23:11.916296  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:23:12.414448  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:23:12.915504  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:23:13.414620  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:23:13.916510  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:23:14.414491  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:23:14.915133  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:23:15.415473  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:23:15.915911  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:23:16.415389  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:23:16.917028  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:23:17.415057  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:23:17.915082  725437 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 03:23:18.414937  725437 kapi.go:107] duration metric: took 1m37.507644376s to wait for app.kubernetes.io/name=ingress-nginx ...
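	[Editor's note] The duration metrics above come from a helper that simply polls the API server for pods matching a label selector until they leave Pending, then reports the elapsed time. A minimal client-go sketch of that polling pattern, under stated assumptions (kubeconfig at the default path, 5s interval, 10m timeout); this is an illustration, not minikube's actual kapi implementation:
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Build a client from the default kubeconfig (assumption: ~/.kube/config).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
	
		selector := "app.kubernetes.io/name=ingress-nginx" // selector taken from the log above
		start := time.Now()
		err = wait.PollImmediate(5*time.Second, 10*time.Minute, func() (bool, error) {
			pods, err := cs.CoreV1().Pods("ingress-nginx").List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // nothing yet: keep waiting, like "current state: Pending"
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil
				}
			}
			return true, nil
		})
		if err != nil {
			panic(err)
		}
		fmt.Printf("took %s to wait for %s\n", time.Since(start), selector)
	}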
	I0116 03:23:18.417146  725437 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, nvidia-device-plugin, ingress-dns, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I0116 03:23:18.418771  725437 addons.go:505] enable addons completed in 1m43.956797111s: enabled=[cloud-spanner storage-provisioner nvidia-device-plugin ingress-dns inspektor-gadget metrics-server yakd storage-provisioner-rancher volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I0116 03:23:18.418810  725437 start.go:233] waiting for cluster config update ...
	I0116 03:23:18.418847  725437 start.go:242] writing updated cluster config ...
	I0116 03:23:18.419124  725437 ssh_runner.go:195] Run: rm -f paused
	I0116 03:23:18.728293  725437 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0116 03:23:18.730501  725437 out.go:177] * Done! kubectl is now configured to use "addons-005301" cluster and "default" namespace by default
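	[Editor's note] "Configured to use" in the final message means minikube wrote an "addons-005301" context into the default kubeconfig and made it current. A short sketch that checks this from Go (the kubeconfig path comes from client-go's clientcmd defaults; the expected values in the comments are taken from the log line above):
	
	package main
	
	import (
		"fmt"
	
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.LoadFromFile(clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		fmt.Println(cfg.CurrentContext) // expected: addons-005301
		if ctx, ok := cfg.Contexts[cfg.CurrentContext]; ok {
			// Namespace may be empty, which kubectl treats as "default".
			fmt.Println(ctx.Cluster, ctx.Namespace)
		}
	}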
	
	
	==> CRI-O <==
	Jan 16 03:26:30 addons-005301 crio[895]: time="2024-01-16 03:26:30.640212118Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=bd20c33a-ffc4-4ec1-a02d-0c4d20eeb635 name=/runtime.v1.ImageService/ImageStatus
	Jan 16 03:26:30 addons-005301 crio[895]: time="2024-01-16 03:26:30.640384050Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=bd20c33a-ffc4-4ec1-a02d-0c4d20eeb635 name=/runtime.v1.ImageService/ImageStatus
	Jan 16 03:26:30 addons-005301 crio[895]: time="2024-01-16 03:26:30.641284036Z" level=info msg="Creating container: default/hello-world-app-5d77478584-m2wns/hello-world-app" id=f1a6fa56-def3-4b25-82df-ffe6ad44eb32 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 16 03:26:30 addons-005301 crio[895]: time="2024-01-16 03:26:30.641374416Z" level=warning msg="Allowed annotations are specified for workload []"
	Jan 16 03:26:30 addons-005301 crio[895]: time="2024-01-16 03:26:30.732328456Z" level=info msg="Created container 13693ea563576048cc9332219b720a947a7496b4c2b1268e9d32bfa7dba987f6: default/hello-world-app-5d77478584-m2wns/hello-world-app" id=f1a6fa56-def3-4b25-82df-ffe6ad44eb32 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 16 03:26:30 addons-005301 crio[895]: time="2024-01-16 03:26:30.733167314Z" level=info msg="Starting container: 13693ea563576048cc9332219b720a947a7496b4c2b1268e9d32bfa7dba987f6" id=54661c85-ebe5-4642-aef6-32b539f32a91 name=/runtime.v1.RuntimeService/StartContainer
	Jan 16 03:26:30 addons-005301 conmon[7772]: conmon 13693ea563576048cc93 <ninfo>: container 7790 exited with status 1
	Jan 16 03:26:30 addons-005301 crio[895]: time="2024-01-16 03:26:30.751628080Z" level=info msg="Started container" PID=7790 containerID=13693ea563576048cc9332219b720a947a7496b4c2b1268e9d32bfa7dba987f6 description=default/hello-world-app-5d77478584-m2wns/hello-world-app id=54661c85-ebe5-4642-aef6-32b539f32a91 name=/runtime.v1.RuntimeService/StartContainer sandboxID=004cf357649cba649bb9a89ad23bf22d8414f109d48d8ab57e087d652e9e96d6
	Jan 16 03:26:31 addons-005301 crio[895]: time="2024-01-16 03:26:31.369261285Z" level=info msg="Removing container: be05357e58f300cb6047c149ed149d36bae619267527c2c9011ddad3ddb04337" id=db50f121-e3f0-44f4-bf6c-aecac3d4990d name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 16 03:26:31 addons-005301 crio[895]: time="2024-01-16 03:26:31.391818067Z" level=info msg="Removed container be05357e58f300cb6047c149ed149d36bae619267527c2c9011ddad3ddb04337: default/hello-world-app-5d77478584-m2wns/hello-world-app" id=db50f121-e3f0-44f4-bf6c-aecac3d4990d name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 16 03:26:32 addons-005301 crio[895]: time="2024-01-16 03:26:32.106148972Z" level=info msg="Stopping container: 4dd301eb36429adb554743e7659b99e229b94c6d09057d4829296b7fec8b3b41 (timeout: 2s)" id=edfada2e-2959-4ac4-8b93-c6f5b995b0b0 name=/runtime.v1.RuntimeService/StopContainer
	Jan 16 03:26:34 addons-005301 crio[895]: time="2024-01-16 03:26:34.113806446Z" level=warning msg="Stopping container 4dd301eb36429adb554743e7659b99e229b94c6d09057d4829296b7fec8b3b41 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=edfada2e-2959-4ac4-8b93-c6f5b995b0b0 name=/runtime.v1.RuntimeService/StopContainer
	Jan 16 03:26:34 addons-005301 conmon[5038]: conmon 4dd301eb36429adb5547 <ninfo>: container 5049 exited with status 137
	Jan 16 03:26:34 addons-005301 crio[895]: time="2024-01-16 03:26:34.255068954Z" level=info msg="Stopped container 4dd301eb36429adb554743e7659b99e229b94c6d09057d4829296b7fec8b3b41: ingress-nginx/ingress-nginx-controller-69cff4fd79-nzwkg/controller" id=edfada2e-2959-4ac4-8b93-c6f5b995b0b0 name=/runtime.v1.RuntimeService/StopContainer
	Jan 16 03:26:34 addons-005301 crio[895]: time="2024-01-16 03:26:34.255548884Z" level=info msg="Stopping pod sandbox: 51a4306c670ce605568ff5589699db07753b24ad58673ac666f5f7ff6762f080" id=195d1331-99c3-4ca5-a0a7-92195bf76464 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jan 16 03:26:34 addons-005301 crio[895]: time="2024-01-16 03:26:34.259153876Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-KW4CMRR3FVK6HVQS - [0:0]\n:KUBE-HP-V3QILALFLE46T5HS - [0:0]\n-X KUBE-HP-KW4CMRR3FVK6HVQS\n-X KUBE-HP-V3QILALFLE46T5HS\nCOMMIT\n"
	Jan 16 03:26:34 addons-005301 crio[895]: time="2024-01-16 03:26:34.260940434Z" level=info msg="Closing host port tcp:80"
	Jan 16 03:26:34 addons-005301 crio[895]: time="2024-01-16 03:26:34.260988639Z" level=info msg="Closing host port tcp:443"
	Jan 16 03:26:34 addons-005301 crio[895]: time="2024-01-16 03:26:34.262492266Z" level=info msg="Host port tcp:80 does not have an open socket"
	Jan 16 03:26:34 addons-005301 crio[895]: time="2024-01-16 03:26:34.262516889Z" level=info msg="Host port tcp:443 does not have an open socket"
	Jan 16 03:26:34 addons-005301 crio[895]: time="2024-01-16 03:26:34.262667077Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-69cff4fd79-nzwkg Namespace:ingress-nginx ID:51a4306c670ce605568ff5589699db07753b24ad58673ac666f5f7ff6762f080 UID:d491ba3c-9702-48cd-ac13-f5fa6eb82e99 NetNS:/var/run/netns/d0c5424a-0cbd-415f-a711-c922edd1a167 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 16 03:26:34 addons-005301 crio[895]: time="2024-01-16 03:26:34.262840732Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-69cff4fd79-nzwkg from CNI network \"kindnet\" (type=ptp)"
	Jan 16 03:26:34 addons-005301 crio[895]: time="2024-01-16 03:26:34.281536076Z" level=info msg="Stopped pod sandbox: 51a4306c670ce605568ff5589699db07753b24ad58673ac666f5f7ff6762f080" id=195d1331-99c3-4ca5-a0a7-92195bf76464 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jan 16 03:26:34 addons-005301 crio[895]: time="2024-01-16 03:26:34.375864928Z" level=info msg="Removing container: 4dd301eb36429adb554743e7659b99e229b94c6d09057d4829296b7fec8b3b41" id=be59d305-3397-4e6b-8e1d-5436aabf1f56 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 16 03:26:34 addons-005301 crio[895]: time="2024-01-16 03:26:34.399495867Z" level=info msg="Removed container 4dd301eb36429adb554743e7659b99e229b94c6d09057d4829296b7fec8b3b41: ingress-nginx/ingress-nginx-controller-69cff4fd79-nzwkg/controller" id=be59d305-3397-4e6b-8e1d-5436aabf1f56 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	13693ea563576       dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79                                                             8 seconds ago        Exited              hello-world-app           2                   004cf357649cb       hello-world-app-5d77478584-m2wns
	cc0f6c43a3d0d       ghcr.io/headlamp-k8s/headlamp@sha256:0fe50c48c186b89ff3d341dba427174d8232a64c3062af5de854a3a7cb2105ce                        About a minute ago   Running             headlamp                  0                   350abf7f1a9d5       headlamp-7ddfbb94ff-kqmk5
	5da0b09d45953       docker.io/library/nginx@sha256:7913e8fa2e6a5f0160a5e6b7ea48b7d4a301c6058d63c3d632a35a59093cb4eb                              2 minutes ago        Running             nginx                     0                   74824bba4ffe8       nginx
	34397fad9b373       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:63b520448091bc94aa4dba00d6b3b3c25e410c4fb73aa46feae5b25f9895abaa                 3 minutes ago        Running             gcp-auth                  0                   e31f04fd2d8bd       gcp-auth-d4c87556c-svgsq
	2ea086ad1b987       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:67202a0258c6f81d073f265f449a732c89cc1112a8e80ea27317294df6dce2b5   3 minutes ago        Exited              patch                     0                   4645f5973ae0b       ingress-nginx-admission-patch-nn5ts
	4a6ce5a594926       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:67202a0258c6f81d073f265f449a732c89cc1112a8e80ea27317294df6dce2b5   3 minutes ago        Exited              create                    0                   975f39d798acf       ingress-nginx-admission-create-gsdgz
	085b0a99e70d3       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              4 minutes ago        Running             yakd                      0                   d4799cb332323       yakd-dashboard-9947fc6bf-5t5qt
	22e396cfd215a       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                                             4 minutes ago        Running             coredns                   0                   0f63c077a1224       coredns-5dd5756b68-j6nhv
	9fe9a8849817a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             4 minutes ago        Running             storage-provisioner       0                   0c7e0f39f7ef6       storage-provisioner
	d710728440df4       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                                             5 minutes ago        Running             kindnet-cni               0                   0ba4899abc11b       kindnet-xgz86
	cb07baf727339       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39                                                             5 minutes ago        Running             kube-proxy                0                   4f7fdcaae2ad7       kube-proxy-824gf
	dccc4e07da8fd       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b                                                             5 minutes ago        Running             kube-controller-manager   0                   d7514bdaa81d3       kube-controller-manager-addons-005301
	8edd89b88292b       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419                                                             5 minutes ago        Running             kube-apiserver            0                   4ed7076040582       kube-apiserver-addons-005301
	0fead85ce43c7       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54                                                             5 minutes ago        Running             kube-scheduler            0                   a9b77ef8ea688       kube-scheduler-addons-005301
	418945a441764       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                                             5 minutes ago        Running             etcd                      0                   c9a232d9fbb6d       etcd-addons-005301
	
	
	==> coredns [22e396cfd215adfeca908ed326d012a6fd2ac16980aad1121825b42c1b22b455] <==
	[INFO] 10.244.0.20:58087 - 10728 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000064583s
	[INFO] 10.244.0.20:58087 - 47932 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000055746s
	[INFO] 10.244.0.20:58087 - 16629 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00005404s
	[INFO] 10.244.0.20:58087 - 14073 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000049846s
	[INFO] 10.244.0.20:58087 - 4752 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001008541s
	[INFO] 10.244.0.20:58087 - 39101 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00086276s
	[INFO] 10.244.0.20:58087 - 15180 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000060653s
	[INFO] 10.244.0.20:32966 - 55107 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000170406s
	[INFO] 10.244.0.20:33301 - 34033 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000191534s
	[INFO] 10.244.0.20:33301 - 24853 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000065765s
	[INFO] 10.244.0.20:33301 - 21482 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000046187s
	[INFO] 10.244.0.20:33301 - 45820 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000060054s
	[INFO] 10.244.0.20:33301 - 35181 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000053211s
	[INFO] 10.244.0.20:33301 - 51090 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000056649s
	[INFO] 10.244.0.20:32966 - 60109 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000085654s
	[INFO] 10.244.0.20:32966 - 42938 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000059217s
	[INFO] 10.244.0.20:32966 - 45074 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000040583s
	[INFO] 10.244.0.20:33301 - 57560 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001084046s
	[INFO] 10.244.0.20:32966 - 41681 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000073207s
	[INFO] 10.244.0.20:33301 - 30626 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002084193s
	[INFO] 10.244.0.20:32966 - 3808 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000039081s
	[INFO] 10.244.0.20:33301 - 13234 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000049354s
	[INFO] 10.244.0.20:32966 - 24579 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000822102s
	[INFO] 10.244.0.20:32966 - 54247 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00126643s
	[INFO] 10.244.0.20:32966 - 37914 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000044759s
	
	
	==> describe nodes <==
	Name:               addons-005301
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-005301
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd
	                    minikube.k8s.io/name=addons-005301
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_16T03_21_22_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-005301
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 03:21:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-005301
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Jan 2024 03:26:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 03:26:27 +0000   Tue, 16 Jan 2024 03:21:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 03:26:27 +0000   Tue, 16 Jan 2024 03:21:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 03:26:27 +0000   Tue, 16 Jan 2024 03:21:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 03:26:27 +0000   Tue, 16 Jan 2024 03:22:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-005301
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 9f2e6d76ce72403ca1d1f06b495eacd8
	  System UUID:                8224059e-293b-497c-91dd-5f6dc6dda964
	  Boot ID:                    8bf0f894-1a91-4593-91c4-b833f91013d1
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-m2wns         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
	  gcp-auth                    gcp-auth-d4c87556c-svgsq                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  headlamp                    headlamp-7ddfbb94ff-kqmk5                0 (0%)        0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 coredns-5dd5756b68-j6nhv                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m4s
	  kube-system                 etcd-addons-005301                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m18s
	  kube-system                 kindnet-xgz86                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m5s
	  kube-system                 kube-apiserver-addons-005301             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-controller-manager-addons-005301    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m20s
	  kube-system                 kube-proxy-824gf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-scheduler-addons-005301             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-5t5qt           0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             348Mi (4%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m59s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m25s (x8 over 5m25s)  kubelet          Node addons-005301 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m25s (x8 over 5m25s)  kubelet          Node addons-005301 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m25s (x8 over 5m25s)  kubelet          Node addons-005301 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m18s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m18s                  kubelet          Node addons-005301 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m18s                  kubelet          Node addons-005301 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m18s                  kubelet          Node addons-005301 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m6s                   node-controller  Node addons-005301 event: Registered Node addons-005301 in Controller
	  Normal  NodeReady                4m34s                  kubelet          Node addons-005301 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.001035] FS-Cache: O-key=[8] '186fed0000000000'
	[  +0.000742] FS-Cache: N-cookie c=0000001e [p=00000015 fl=2 nc=0 na=1]
	[  +0.000952] FS-Cache: N-cookie d=0000000029d6e22a{9p.inode} n=00000000d147697f
	[  +0.001073] FS-Cache: N-key=[8] '186fed0000000000'
	[  +0.004020] FS-Cache: Duplicate cookie detected
	[  +0.000745] FS-Cache: O-cookie c=00000018 [p=00000015 fl=226 nc=0 na=1]
	[  +0.000987] FS-Cache: O-cookie d=0000000029d6e22a{9p.inode} n=00000000a585a583
	[  +0.001058] FS-Cache: O-key=[8] '186fed0000000000'
	[  +0.000721] FS-Cache: N-cookie c=0000001f [p=00000015 fl=2 nc=0 na=1]
	[  +0.000974] FS-Cache: N-cookie d=0000000029d6e22a{9p.inode} n=0000000049f251cf
	[  +0.001119] FS-Cache: N-key=[8] '186fed0000000000'
	[  +2.577465] FS-Cache: Duplicate cookie detected
	[  +0.000757] FS-Cache: O-cookie c=00000016 [p=00000015 fl=226 nc=0 na=1]
	[  +0.001021] FS-Cache: O-cookie d=0000000029d6e22a{9p.inode} n=00000000046074fb
	[  +0.001142] FS-Cache: O-key=[8] '176fed0000000000'
	[  +0.000736] FS-Cache: N-cookie c=00000021 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000953] FS-Cache: N-cookie d=0000000029d6e22a{9p.inode} n=00000000d147697f
	[  +0.001134] FS-Cache: N-key=[8] '176fed0000000000'
	[  +0.382475] FS-Cache: Duplicate cookie detected
	[  +0.000796] FS-Cache: O-cookie c=0000001b [p=00000015 fl=226 nc=0 na=1]
	[  +0.000973] FS-Cache: O-cookie d=0000000029d6e22a{9p.inode} n=00000000ee502221
	[  +0.001088] FS-Cache: O-key=[8] '1d6fed0000000000'
	[  +0.000714] FS-Cache: N-cookie c=00000022 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000930] FS-Cache: N-cookie d=0000000029d6e22a{9p.inode} n=00000000b565765d
	[  +0.001058] FS-Cache: N-key=[8] '1d6fed0000000000'
	
	
	==> etcd [418945a4417646bec939eeb041f70c497f6f402a3906c7ac4ac0d7601dd9400a] <==
	{"level":"info","ts":"2024-01-16T03:21:35.270082Z","caller":"traceutil/trace.go:171","msg":"trace[1342265535] linearizableReadLoop","detail":"{readStateIndex:383; appliedIndex:380; }","duration":"128.510672ms","start":"2024-01-16T03:21:35.141551Z","end":"2024-01-16T03:21:35.270062Z","steps":["trace[1342265535] 'read index received'  (duration: 11.390655ms)","trace[1342265535] 'applied index is now lower than readState.Index'  (duration: 117.119123ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-16T03:21:35.288146Z","caller":"traceutil/trace.go:171","msg":"trace[1806793935] transaction","detail":"{read_only:false; response_revision:371; number_of_response:1; }","duration":"206.679806ms","start":"2024-01-16T03:21:35.081445Z","end":"2024-01-16T03:21:35.288125Z","steps":["trace[1806793935] 'process raft request'  (duration: 188.117524ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T03:21:35.288186Z","caller":"traceutil/trace.go:171","msg":"trace[1252538053] transaction","detail":"{read_only:false; response_revision:372; number_of_response:1; }","duration":"182.454834ms","start":"2024-01-16T03:21:35.105726Z","end":"2024-01-16T03:21:35.288181Z","steps":["trace[1252538053] 'process raft request'  (duration: 164.28092ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T03:21:35.288202Z","caller":"traceutil/trace.go:171","msg":"trace[38586996] transaction","detail":"{read_only:false; response_revision:373; number_of_response:1; }","duration":"182.829639ms","start":"2024-01-16T03:21:35.105368Z","end":"2024-01-16T03:21:35.288197Z","steps":["trace[38586996] 'process raft request'  (duration: 164.675417ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T03:21:35.368236Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"312.3101ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/\" range_end:\"/registry/serviceaccounts/default0\" ","response":"range_response_count:1 size:171"}
	{"level":"info","ts":"2024-01-16T03:21:35.441142Z","caller":"traceutil/trace.go:171","msg":"trace[2145952751] range","detail":"{range_begin:/registry/serviceaccounts/default/; range_end:/registry/serviceaccounts/default0; response_count:1; response_revision:373; }","duration":"385.220508ms","start":"2024-01-16T03:21:35.055901Z","end":"2024-01-16T03:21:35.441122Z","steps":["trace[2145952751] 'agreement among raft nodes before linearized reading'  (duration: 232.309132ms)","trace[2145952751] 'range keys from in-memory index tree'  (duration: 79.971946ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-16T03:21:35.491402Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T03:21:35.055887Z","time spent":"435.492352ms","remote":"127.0.0.1:50218","response type":"/etcdserverpb.KV/Range","request count":0,"request size":72,"response count":1,"response size":195,"request content":"key:\"/registry/serviceaccounts/default/\" range_end:\"/registry/serviceaccounts/default0\" "}
	{"level":"warn","ts":"2024-01-16T03:21:35.492289Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T03:21:35.081428Z","time spent":"340.798967ms","remote":"127.0.0.1:50210","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5164,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/etcd-addons-005301\" mod_revision:304 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-addons-005301\" value_size:5111 >> failure:<request_range:<key:\"/registry/pods/kube-system/etcd-addons-005301\" > >"}
	{"level":"info","ts":"2024-01-16T03:21:35.517626Z","caller":"traceutil/trace.go:171","msg":"trace[773032025] transaction","detail":"{read_only:false; response_revision:374; number_of_response:1; }","duration":"228.998394ms","start":"2024-01-16T03:21:35.288616Z","end":"2024-01-16T03:21:35.517614Z","steps":["trace[773032025] 'process raft request'  (duration: 203.252292ms)","trace[773032025] 'compare'  (duration: 25.630564ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-16T03:21:35.517655Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T03:21:35.105716Z","time spent":"382.763233ms","remote":"127.0.0.1:50112","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":689,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/coredns.17aab5cb9fc26d33\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/coredns.17aab5cb9fc26d33\" value_size:618 lease:8128026528590636215 >> failure:<>"}
	{"level":"warn","ts":"2024-01-16T03:21:35.51741Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T03:21:35.105342Z","time spent":"335.761462ms","remote":"127.0.0.1:50112","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":767,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/kindnet-xgz86.17aab5cb867404e6\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kindnet-xgz86.17aab5cb867404e6\" value_size:690 lease:8128026528590636215 >> failure:<>"}
	{"level":"info","ts":"2024-01-16T03:21:35.582543Z","caller":"traceutil/trace.go:171","msg":"trace[192556193] transaction","detail":"{read_only:false; response_revision:376; number_of_response:1; }","duration":"161.265856ms","start":"2024-01-16T03:21:35.421263Z","end":"2024-01-16T03:21:35.582529Z","steps":["trace[192556193] 'process raft request'  (duration: 161.237195ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T03:21:35.582737Z","caller":"traceutil/trace.go:171","msg":"trace[362605364] transaction","detail":"{read_only:false; response_revision:375; number_of_response:1; }","duration":"161.527331ms","start":"2024-01-16T03:21:35.421203Z","end":"2024-01-16T03:21:35.582731Z","steps":["trace[362605364] 'process raft request'  (duration: 161.224658ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T03:21:35.592622Z","caller":"traceutil/trace.go:171","msg":"trace[464397992] transaction","detail":"{read_only:false; response_revision:377; number_of_response:1; }","duration":"100.579069ms","start":"2024-01-16T03:21:35.492029Z","end":"2024-01-16T03:21:35.592608Z","steps":["trace[464397992] 'process raft request'  (duration: 100.300601ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T03:21:38.484997Z","caller":"traceutil/trace.go:171","msg":"trace[2065030940] transaction","detail":"{read_only:false; response_revision:405; number_of_response:1; }","duration":"114.956373ms","start":"2024-01-16T03:21:38.370026Z","end":"2024-01-16T03:21:38.484982Z","steps":["trace[2065030940] 'process raft request'  (duration: 36.400348ms)","trace[2065030940] 'compare'  (duration: 78.26173ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-16T03:21:38.509883Z","caller":"traceutil/trace.go:171","msg":"trace[2021858632] transaction","detail":"{read_only:false; response_revision:406; number_of_response:1; }","duration":"103.806131ms","start":"2024-01-16T03:21:38.40606Z","end":"2024-01-16T03:21:38.509866Z","steps":["trace[2021858632] 'process raft request'  (duration: 103.60384ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T03:21:38.623639Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.965271ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/default/cloud-spanner-emulator\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-16T03:21:38.629317Z","caller":"traceutil/trace.go:171","msg":"trace[1226851660] range","detail":"{range_begin:/registry/services/specs/default/cloud-spanner-emulator; range_end:; response_count:0; response_revision:412; }","duration":"143.649378ms","start":"2024-01-16T03:21:38.485649Z","end":"2024-01-16T03:21:38.629298Z","steps":["trace[1226851660] 'agreement among raft nodes before linearized reading'  (duration: 137.932566ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T03:21:38.627607Z","caller":"traceutil/trace.go:171","msg":"trace[2040295345] transaction","detail":"{read_only:false; response_revision:411; number_of_response:1; }","duration":"110.746921ms","start":"2024-01-16T03:21:38.516841Z","end":"2024-01-16T03:21:38.627588Z","steps":["trace[2040295345] 'process raft request'  (duration: 106.658691ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T03:21:38.627781Z","caller":"traceutil/trace.go:171","msg":"trace[506068484] transaction","detail":"{read_only:false; response_revision:410; number_of_response:1; }","duration":"111.100138ms","start":"2024-01-16T03:21:38.516675Z","end":"2024-01-16T03:21:38.627775Z","steps":["trace[506068484] 'process raft request'  (duration: 99.693606ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T03:21:38.62826Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.018181ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-01-16T03:21:38.650302Z","caller":"traceutil/trace.go:171","msg":"trace[881971101] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:412; }","duration":"150.054927ms","start":"2024-01-16T03:21:38.500228Z","end":"2024-01-16T03:21:38.650283Z","steps":["trace[881971101] 'agreement among raft nodes before linearized reading'  (duration: 128.000179ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T03:21:39.084846Z","caller":"traceutil/trace.go:171","msg":"trace[1235996764] transaction","detail":"{read_only:false; response_revision:421; number_of_response:1; }","duration":"172.021133ms","start":"2024-01-16T03:21:38.9128Z","end":"2024-01-16T03:21:39.084821Z","steps":["trace[1235996764] 'process raft request'  (duration: 171.950659ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T03:21:39.085546Z","caller":"traceutil/trace.go:171","msg":"trace[999622315] transaction","detail":"{read_only:false; response_revision:419; number_of_response:1; }","duration":"191.991426ms","start":"2024-01-16T03:21:38.893544Z","end":"2024-01-16T03:21:39.085535Z","steps":["trace[999622315] 'process raft request'  (duration: 184.854499ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T03:21:39.085747Z","caller":"traceutil/trace.go:171","msg":"trace[1713933762] transaction","detail":"{read_only:false; response_revision:420; number_of_response:1; }","duration":"173.106681ms","start":"2024-01-16T03:21:38.912629Z","end":"2024-01-16T03:21:39.085735Z","steps":["trace[1713933762] 'process raft request'  (duration: 172.037175ms)"],"step_count":1}
	
	
	==> gcp-auth [34397fad9b373cc5ac8d449ad621f37463a398eb440a182cf0c0674626cc3c81] <==
	2024/01/16 03:22:52 GCP Auth Webhook started!
	2024/01/16 03:23:30 Ready to marshal response ...
	2024/01/16 03:23:30 Ready to write response ...
	2024/01/16 03:23:37 Ready to marshal response ...
	2024/01/16 03:23:37 Ready to write response ...
	2024/01/16 03:23:54 Ready to marshal response ...
	2024/01/16 03:23:54 Ready to write response ...
	2024/01/16 03:23:54 Ready to marshal response ...
	2024/01/16 03:23:54 Ready to write response ...
	2024/01/16 03:24:25 Ready to marshal response ...
	2024/01/16 03:24:25 Ready to write response ...
	2024/01/16 03:24:25 Ready to marshal response ...
	2024/01/16 03:24:25 Ready to write response ...
	2024/01/16 03:24:35 Ready to marshal response ...
	2024/01/16 03:24:35 Ready to write response ...
	2024/01/16 03:25:26 Ready to marshal response ...
	2024/01/16 03:25:26 Ready to write response ...
	2024/01/16 03:25:26 Ready to marshal response ...
	2024/01/16 03:25:26 Ready to write response ...
	2024/01/16 03:25:26 Ready to marshal response ...
	2024/01/16 03:25:26 Ready to write response ...
	2024/01/16 03:26:13 Ready to marshal response ...
	2024/01/16 03:26:13 Ready to write response ...
	
	
	==> kernel <==
	 03:26:39 up  3:09,  0 users,  load average: 0.42, 1.62, 2.12
	Linux addons-005301 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [d710728440df453f16f499f670fb08b7829512c27202cddcbbc76bccd2994614] <==
	I0116 03:24:35.796291       1 main.go:227] handling current node
	I0116 03:24:45.807715       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 03:24:45.807740       1 main.go:227] handling current node
	I0116 03:24:55.820220       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 03:24:55.820248       1 main.go:227] handling current node
	I0116 03:25:05.832790       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 03:25:05.832816       1 main.go:227] handling current node
	I0116 03:25:15.837036       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 03:25:15.837066       1 main.go:227] handling current node
	I0116 03:25:25.865365       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 03:25:25.865400       1 main.go:227] handling current node
	I0116 03:25:35.869738       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 03:25:35.869760       1 main.go:227] handling current node
	I0116 03:25:45.881306       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 03:25:45.881337       1 main.go:227] handling current node
	I0116 03:25:55.885635       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 03:25:55.885664       1 main.go:227] handling current node
	I0116 03:26:05.896487       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 03:26:05.896516       1 main.go:227] handling current node
	I0116 03:26:15.914273       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 03:26:15.914300       1 main.go:227] handling current node
	I0116 03:26:25.927003       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 03:26:25.927031       1 main.go:227] handling current node
	I0116 03:26:35.931483       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 03:26:35.931513       1 main.go:227] handling current node
	
	
	==> kube-apiserver [8edd89b88292b0432872ea1a259d1761e165eecb2c412f36a0cbe66eab4ac667] <==
	I0116 03:24:13.111796       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0116 03:24:13.214737       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0116 03:24:13.214863       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0116 03:24:13.225141       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0116 03:24:13.225306       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0116 03:24:13.245379       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0116 03:24:13.245426       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0116 03:24:13.250817       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0116 03:24:13.250931       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0116 03:24:13.268808       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0116 03:24:13.268855       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0116 03:24:13.281349       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0116 03:24:13.281468       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0116 03:24:13.293904       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0116 03:24:13.293946       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0116 03:24:14.251270       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0116 03:24:14.293940       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0116 03:24:14.304365       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0116 03:24:36.110482       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0116 03:24:36.114056       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0116 03:24:36.117404       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0116 03:24:51.118506       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0116 03:25:26.288371       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.197.18"}
	I0116 03:26:13.885088       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.103.237.45"}
	E0116 03:26:31.150633       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [dccc4e07da8fd5f9be227e4cefccbbafc5c9bfdf778ee1da6c6ac425b270e4ac] <==
	I0116 03:25:30.289963       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-7ddfbb94ff" duration="12.213357ms"
	I0116 03:25:30.291162       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-7ddfbb94ff" duration="36.71µs"
	W0116 03:26:03.483003       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0116 03:26:03.483036       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0116 03:26:05.186887       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0116 03:26:05.186920       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0116 03:26:06.777151       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0116 03:26:06.777181       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0116 03:26:13.620110       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0116 03:26:13.642199       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-m2wns"
	I0116 03:26:13.658598       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="37.984579ms"
	I0116 03:26:13.684464       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="20.401745ms"
	I0116 03:26:13.685500       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="45.129µs"
	I0116 03:26:13.685953       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="188.867µs"
	W0116 03:26:15.241233       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0116 03:26:15.241277       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0116 03:26:16.351105       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="108.653µs"
	I0116 03:26:17.353980       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="95.886µs"
	I0116 03:26:18.350683       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="42.889µs"
	I0116 03:26:31.075733       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0116 03:26:31.081154       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="5.203µs"
	I0116 03:26:31.084133       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0116 03:26:31.386779       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="63.418µs"
	W0116 03:26:38.023668       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0116 03:26:38.023699       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [cb07baf727339831425b188281af6563fad2143ff3633d215c38adb7fcb9c998] <==
	I0116 03:21:40.205053       1 server_others.go:69] "Using iptables proxy"
	I0116 03:21:40.336942       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0116 03:21:40.598174       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0116 03:21:40.607171       1 server_others.go:152] "Using iptables Proxier"
	I0116 03:21:40.608583       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0116 03:21:40.609008       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0116 03:21:40.609118       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0116 03:21:40.609417       1 server.go:846] "Version info" version="v1.28.4"
	I0116 03:21:40.610831       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0116 03:21:40.611682       1 config.go:188] "Starting service config controller"
	I0116 03:21:40.613919       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0116 03:21:40.612332       1 config.go:97] "Starting endpoint slice config controller"
	I0116 03:21:40.614032       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0116 03:21:40.613352       1 config.go:315] "Starting node config controller"
	I0116 03:21:40.614087       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0116 03:21:40.716148       1 shared_informer.go:318] Caches are synced for node config
	I0116 03:21:40.716569       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0116 03:21:40.720727       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [0fead85ce43c71609b7a785137cee45ac0295e44643c5f290dd164ff3b4e7b03] <==
	W0116 03:21:19.620579       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0116 03:21:19.620668       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0116 03:21:19.620655       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0116 03:21:19.620739       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0116 03:21:19.620722       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0116 03:21:19.620807       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0116 03:21:19.620813       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0116 03:21:19.620873       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0116 03:21:19.620892       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0116 03:21:19.620935       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0116 03:21:19.620961       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0116 03:21:19.620944       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0116 03:21:19.621027       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0116 03:21:19.621046       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0116 03:21:19.621090       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0116 03:21:19.621106       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0116 03:21:19.621096       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0116 03:21:19.621166       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0116 03:21:19.621183       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0116 03:21:19.621245       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0116 03:21:19.621147       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0116 03:21:19.621263       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0116 03:21:19.622596       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0116 03:21:19.622622       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0116 03:21:21.020349       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 16 03:26:26 addons-005301 kubelet[1352]: E0116 03:26:26.638673    1352 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(b009930c-7cb7-4e4d-b211-15b10b7426d5)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="b009930c-7cb7-4e4d-b211-15b10b7426d5"
	Jan 16 03:26:29 addons-005301 kubelet[1352]: I0116 03:26:29.814696    1352 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnwtn\" (UniqueName: \"kubernetes.io/projected/b009930c-7cb7-4e4d-b211-15b10b7426d5-kube-api-access-lnwtn\") pod \"b009930c-7cb7-4e4d-b211-15b10b7426d5\" (UID: \"b009930c-7cb7-4e4d-b211-15b10b7426d5\") "
	Jan 16 03:26:29 addons-005301 kubelet[1352]: I0116 03:26:29.819344    1352 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b009930c-7cb7-4e4d-b211-15b10b7426d5-kube-api-access-lnwtn" (OuterVolumeSpecName: "kube-api-access-lnwtn") pod "b009930c-7cb7-4e4d-b211-15b10b7426d5" (UID: "b009930c-7cb7-4e4d-b211-15b10b7426d5"). InnerVolumeSpecName "kube-api-access-lnwtn". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 16 03:26:29 addons-005301 kubelet[1352]: I0116 03:26:29.915505    1352 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-lnwtn\" (UniqueName: \"kubernetes.io/projected/b009930c-7cb7-4e4d-b211-15b10b7426d5-kube-api-access-lnwtn\") on node \"addons-005301\" DevicePath \"\""
	Jan 16 03:26:30 addons-005301 kubelet[1352]: I0116 03:26:30.363265    1352 scope.go:117] "RemoveContainer" containerID="fbf7ee78e0423d897027e1803b8245ab20a06f558e3e013f4ef7e2ee7690fa15"
	Jan 16 03:26:30 addons-005301 kubelet[1352]: E0116 03:26:30.400380    1352 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/b97fbaba3c34c4e20a68602ff556563f57373f8cf5babb6ccf3269ee792f840a/diff" to get inode usage: stat /var/lib/containers/storage/overlay/b97fbaba3c34c4e20a68602ff556563f57373f8cf5babb6ccf3269ee792f840a/diff: no such file or directory, extraDiskErr: <nil>
	Jan 16 03:26:30 addons-005301 kubelet[1352]: I0116 03:26:30.638197    1352 scope.go:117] "RemoveContainer" containerID="be05357e58f300cb6047c149ed149d36bae619267527c2c9011ddad3ddb04337"
	Jan 16 03:26:31 addons-005301 kubelet[1352]: I0116 03:26:31.367742    1352 scope.go:117] "RemoveContainer" containerID="be05357e58f300cb6047c149ed149d36bae619267527c2c9011ddad3ddb04337"
	Jan 16 03:26:31 addons-005301 kubelet[1352]: I0116 03:26:31.367936    1352 scope.go:117] "RemoveContainer" containerID="13693ea563576048cc9332219b720a947a7496b4c2b1268e9d32bfa7dba987f6"
	Jan 16 03:26:31 addons-005301 kubelet[1352]: E0116 03:26:31.368223    1352 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-m2wns_default(5c7be63b-1f70-414b-82c8-35ce83e7edb8)\"" pod="default/hello-world-app-5d77478584-m2wns" podUID="5c7be63b-1f70-414b-82c8-35ce83e7edb8"
	Jan 16 03:26:31 addons-005301 kubelet[1352]: I0116 03:26:31.639682    1352 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="077bd85c-8e6b-4751-b55b-ab84df0dae07" path="/var/lib/kubelet/pods/077bd85c-8e6b-4751-b55b-ab84df0dae07/volumes"
	Jan 16 03:26:31 addons-005301 kubelet[1352]: I0116 03:26:31.640105    1352 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="925cb820-a31e-415d-8cd6-2312a4b5c936" path="/var/lib/kubelet/pods/925cb820-a31e-415d-8cd6-2312a4b5c936/volumes"
	Jan 16 03:26:31 addons-005301 kubelet[1352]: I0116 03:26:31.640463    1352 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b009930c-7cb7-4e4d-b211-15b10b7426d5" path="/var/lib/kubelet/pods/b009930c-7cb7-4e4d-b211-15b10b7426d5/volumes"
	Jan 16 03:26:34 addons-005301 kubelet[1352]: I0116 03:26:34.344192    1352 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhgfn\" (UniqueName: \"kubernetes.io/projected/d491ba3c-9702-48cd-ac13-f5fa6eb82e99-kube-api-access-mhgfn\") pod \"d491ba3c-9702-48cd-ac13-f5fa6eb82e99\" (UID: \"d491ba3c-9702-48cd-ac13-f5fa6eb82e99\") "
	Jan 16 03:26:34 addons-005301 kubelet[1352]: I0116 03:26:34.344248    1352 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d491ba3c-9702-48cd-ac13-f5fa6eb82e99-webhook-cert\") pod \"d491ba3c-9702-48cd-ac13-f5fa6eb82e99\" (UID: \"d491ba3c-9702-48cd-ac13-f5fa6eb82e99\") "
	Jan 16 03:26:34 addons-005301 kubelet[1352]: I0116 03:26:34.346488    1352 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d491ba3c-9702-48cd-ac13-f5fa6eb82e99-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "d491ba3c-9702-48cd-ac13-f5fa6eb82e99" (UID: "d491ba3c-9702-48cd-ac13-f5fa6eb82e99"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 16 03:26:34 addons-005301 kubelet[1352]: I0116 03:26:34.347411    1352 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d491ba3c-9702-48cd-ac13-f5fa6eb82e99-kube-api-access-mhgfn" (OuterVolumeSpecName: "kube-api-access-mhgfn") pod "d491ba3c-9702-48cd-ac13-f5fa6eb82e99" (UID: "d491ba3c-9702-48cd-ac13-f5fa6eb82e99"). InnerVolumeSpecName "kube-api-access-mhgfn". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 16 03:26:34 addons-005301 kubelet[1352]: I0116 03:26:34.374485    1352 scope.go:117] "RemoveContainer" containerID="4dd301eb36429adb554743e7659b99e229b94c6d09057d4829296b7fec8b3b41"
	Jan 16 03:26:34 addons-005301 kubelet[1352]: E0116 03:26:34.396592    1352 cadvisor_stats_provider.go:444] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/docker/e892c3c95eccaaf0a5a6dbe4ddae6a9012e825ea4b022193b943e5c59ec5ae4e/crio/crio-4dd301eb36429adb554743e7659b99e229b94c6d09057d4829296b7fec8b3b41\": RecentStats: unable to find data in memory cache]"
	Jan 16 03:26:34 addons-005301 kubelet[1352]: I0116 03:26:34.400860    1352 scope.go:117] "RemoveContainer" containerID="4dd301eb36429adb554743e7659b99e229b94c6d09057d4829296b7fec8b3b41"
	Jan 16 03:26:34 addons-005301 kubelet[1352]: E0116 03:26:34.401226    1352 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dd301eb36429adb554743e7659b99e229b94c6d09057d4829296b7fec8b3b41\": container with ID starting with 4dd301eb36429adb554743e7659b99e229b94c6d09057d4829296b7fec8b3b41 not found: ID does not exist" containerID="4dd301eb36429adb554743e7659b99e229b94c6d09057d4829296b7fec8b3b41"
	Jan 16 03:26:34 addons-005301 kubelet[1352]: I0116 03:26:34.401272    1352 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dd301eb36429adb554743e7659b99e229b94c6d09057d4829296b7fec8b3b41"} err="failed to get container status \"4dd301eb36429adb554743e7659b99e229b94c6d09057d4829296b7fec8b3b41\": rpc error: code = NotFound desc = could not find container \"4dd301eb36429adb554743e7659b99e229b94c6d09057d4829296b7fec8b3b41\": container with ID starting with 4dd301eb36429adb554743e7659b99e229b94c6d09057d4829296b7fec8b3b41 not found: ID does not exist"
	Jan 16 03:26:34 addons-005301 kubelet[1352]: I0116 03:26:34.445067    1352 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-mhgfn\" (UniqueName: \"kubernetes.io/projected/d491ba3c-9702-48cd-ac13-f5fa6eb82e99-kube-api-access-mhgfn\") on node \"addons-005301\" DevicePath \"\""
	Jan 16 03:26:34 addons-005301 kubelet[1352]: I0116 03:26:34.445108    1352 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d491ba3c-9702-48cd-ac13-f5fa6eb82e99-webhook-cert\") on node \"addons-005301\" DevicePath \"\""
	Jan 16 03:26:35 addons-005301 kubelet[1352]: I0116 03:26:35.639405    1352 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d491ba3c-9702-48cd-ac13-f5fa6eb82e99" path="/var/lib/kubelet/pods/d491ba3c-9702-48cd-ac13-f5fa6eb82e99/volumes"
	
	
	==> storage-provisioner [9fe9a8849817ae3bf03f1013ac88b21fd0ef80ecb5dcbe00cbf613e30e239b1b] <==
	I0116 03:22:06.878045       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0116 03:22:06.893457       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0116 03:22:06.893631       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0116 03:22:06.906871       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0116 03:22:06.907157       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-005301_d5571091-241b-48d0-94a0-4f6dc5520141!
	I0116 03:22:06.907691       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"930bbf47-bdd8-48c8-8e87-0206885fb0a1", APIVersion:"v1", ResourceVersion:"901", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-005301_d5571091-241b-48d0-94a0-4f6dc5520141 became leader
	I0116 03:22:07.013877       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-005301_d5571091-241b-48d0-94a0-4f6dc5520141!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-005301 -n addons-005301
helpers_test.go:261: (dbg) Run:  kubectl --context addons-005301 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (167.34s)
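
The post-mortem above contains two leads worth pulling on: etcd repeatedly logs "apply request took too long" (read latencies of 300-400ms against a 100ms budget), and the kubelet shows kube-ingress-dns-minikube in CrashLoopBackOff, which lines up with the nslookup timeout later in the run. A minimal triage sketch against a still-running profile, mirroring the test's own invocations; the etcd cert paths and the presence of a shell in the etcd image are assumed defaults for the kicbase image, not confirmed by this report:

    # Reproduce the failing ingress probe with verbose output and a shorter timeout
    out/minikube-linux-arm64 -p addons-005301 ssh "curl -v --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # Inspect the ingress controller and the crash-looping ingress-dns pod
    kubectl --context addons-005301 -n ingress-nginx logs -l app.kubernetes.io/component=controller --tail=50
    kubectl --context addons-005301 -n kube-system logs kube-ingress-dns-minikube --previous
    # Check etcd health/latency from inside the control-plane pod (cert paths assumed)
    kubectl --context addons-005301 -n kube-system exec etcd-addons-005301 -- sh -c "ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/var/lib/minikube/certs/etcd/ca.crt --cert=/var/lib/minikube/certs/etcd/server.crt --key=/var/lib/minikube/certs/etcd/server.key endpoint status -w table"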

TestIngressAddonLegacy/serial/ValidateIngressAddons (174.32s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:207: (dbg) Run:  kubectl --context ingress-addon-legacy-194312 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:207: (dbg) Done: kubectl --context ingress-addon-legacy-194312 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (9.129150052s)
addons_test.go:232: (dbg) Run:  kubectl --context ingress-addon-legacy-194312 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context ingress-addon-legacy-194312 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [737f5e76-29be-4d15-8dce-fc6b61648b5f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [737f5e76-29be-4d15-8dce-fc6b61648b5f] Running
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 8.003656481s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-194312 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0116 03:33:46.461098  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/client.crt: no such file or directory
E0116 03:35:38.776834  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/functional-983329/client.crt: no such file or directory
E0116 03:35:38.782127  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/functional-983329/client.crt: no such file or directory
E0116 03:35:38.792350  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/functional-983329/client.crt: no such file or directory
E0116 03:35:38.812637  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/functional-983329/client.crt: no such file or directory
E0116 03:35:38.852872  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/functional-983329/client.crt: no such file or directory
E0116 03:35:38.933226  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/functional-983329/client.crt: no such file or directory
E0116 03:35:39.093585  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/functional-983329/client.crt: no such file or directory
E0116 03:35:39.414111  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/functional-983329/client.crt: no such file or directory
E0116 03:35:40.054973  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/functional-983329/client.crt: no such file or directory
E0116 03:35:41.335181  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/functional-983329/client.crt: no such file or directory
E0116 03:35:43.895383  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/functional-983329/client.crt: no such file or directory
E0116 03:35:49.016410  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/functional-983329/client.crt: no such file or directory
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ingress-addon-legacy-194312 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.988229528s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
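Exit status 28 is curl's operation-timed-out code, and minikube ssh propagates the remote command's exit status, so the request made it into the node but nothing answered on 127.0.0.1:80 within the deadline (a timeout, not a connection refusal). A quick manual check of the same path, with a short timeout and just the HTTP status printed (a sketch, assuming the profile is still up):

    out/minikube-linux-arm64 -p ingress-addon-legacy-194312 ssh "curl -s -o /dev/null -w '%{http_code}\n' --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"
    kubectl --context ingress-addon-legacy-194312 -n ingress-nginx get pods,svc -o wide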
addons_test.go:286: (dbg) Run:  kubectl --context ingress-addon-legacy-194312 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-194312 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
E0116 03:35:59.256566  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/functional-983329/client.crt: no such file or directory
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.009371186s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-194312 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-194312 addons disable ingress-dns --alsologtostderr -v=1: (1.214385813s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-194312 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-194312 addons disable ingress --alsologtostderr -v=1: (7.530163365s)
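
Note that the nslookup step queries the node IP (192.168.49.2) directly, so it can only succeed while the ingress-dns pod is serving on that address; the teardown above disables both addons, so any manual re-check has to re-enable them first. A sketch of that re-check, assuming the addon pod keeps its default name and re-using the test's own fixture:

    # Re-enable the addon and re-apply the example ingress (names assumed defaults)
    out/minikube-linux-arm64 -p ingress-addon-legacy-194312 addons enable ingress-dns
    kubectl --context ingress-addon-legacy-194312 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
    kubectl --context ingress-addon-legacy-194312 -n kube-system get pods -o wide | grep ingress-dns
    # Query the DNS server the addon runs on the node IP
    nslookup hello-john.test $(out/minikube-linux-arm64 -p ingress-addon-legacy-194312 ip)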
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-194312
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-194312:

-- stdout --
	[
	    {
	        "Id": "856c66b6cd39232c11855c62129de294dbe278643b0c9f1221e46f80df4e01cf",
	        "Created": "2024-01-16T03:32:04.291434744Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 752172,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-16T03:32:04.607843873Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20e2d9b56eb2e595fd2b9c5719a0e58f3d7f8c692190d8fde2558cb6a9714f01",
	        "ResolvConfPath": "/var/lib/docker/containers/856c66b6cd39232c11855c62129de294dbe278643b0c9f1221e46f80df4e01cf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/856c66b6cd39232c11855c62129de294dbe278643b0c9f1221e46f80df4e01cf/hostname",
	        "HostsPath": "/var/lib/docker/containers/856c66b6cd39232c11855c62129de294dbe278643b0c9f1221e46f80df4e01cf/hosts",
	        "LogPath": "/var/lib/docker/containers/856c66b6cd39232c11855c62129de294dbe278643b0c9f1221e46f80df4e01cf/856c66b6cd39232c11855c62129de294dbe278643b0c9f1221e46f80df4e01cf-json.log",
	        "Name": "/ingress-addon-legacy-194312",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-194312:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-194312",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0aba4bfdba931400cafd971cf120e34041ccc3a961225109b158ca810d0c7831-init/diff:/var/lib/docker/overlay2/a206f4642a9a6aaf26e75b007cd03505dc1586f0041014295f47d8b249463698/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0aba4bfdba931400cafd971cf120e34041ccc3a961225109b158ca810d0c7831/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0aba4bfdba931400cafd971cf120e34041ccc3a961225109b158ca810d0c7831/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0aba4bfdba931400cafd971cf120e34041ccc3a961225109b158ca810d0c7831/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-194312",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-194312/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-194312",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-194312",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-194312",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "efb9e3060173f1d4b137b4b3368880d6ac780c98d95d3c1edf358ccbfed5969f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33497"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33496"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33493"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33495"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33494"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/efb9e3060173",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-194312": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "856c66b6cd39",
	                        "ingress-addon-legacy-194312"
	                    ],
	                    "NetworkID": "916ff9902ec5920562755e13cb4d5a96a3097a3248702c24eb108bf404b6b1fa",
	                    "EndpointID": "63d788a40b35cbda2301233228023ab8eab4198e6addcc42e815160b7c124f4d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
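The inspect output above confirms the container is running, with the API server port 8443/tcp published on 127.0.0.1:33494 and the node at 192.168.49.2 on the profile network, which is what the status probe below depends on. The same two facts can be pulled directly with format templates rather than reading the full JSON (a sketch; the template paths follow the inspect document above):

    docker port ingress-addon-legacy-194312 8443
    docker inspect -f '{{ (index .NetworkSettings.Networks "ingress-addon-legacy-194312").IPAddress }}' ingress-addon-legacy-194312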
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-194312 -n ingress-addon-legacy-194312
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-194312 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-194312 logs -n 25: (1.377138304s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	
	==> Audit <==
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| update-context | functional-983329                                                      | functional-983329           | jenkins | v1.32.0 | 16 Jan 24 03:31 UTC | 16 Jan 24 03:31 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-983329                                                      | functional-983329           | jenkins | v1.32.0 | 16 Jan 24 03:31 UTC | 16 Jan 24 03:31 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| image          | functional-983329 image load --daemon                                  | functional-983329           | jenkins | v1.32.0 | 16 Jan 24 03:31 UTC | 16 Jan 24 03:31 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-983329               |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-983329 image ls                                             | functional-983329           | jenkins | v1.32.0 | 16 Jan 24 03:31 UTC | 16 Jan 24 03:31 UTC |
	| image          | functional-983329 image save                                           | functional-983329           | jenkins | v1.32.0 | 16 Jan 24 03:31 UTC | 16 Jan 24 03:31 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-983329               |                             |         |         |                     |                     |
	|                | /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-983329 image rm                                             | functional-983329           | jenkins | v1.32.0 | 16 Jan 24 03:31 UTC | 16 Jan 24 03:31 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-983329               |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-983329 image ls                                             | functional-983329           | jenkins | v1.32.0 | 16 Jan 24 03:31 UTC | 16 Jan 24 03:31 UTC |
	| image          | functional-983329 image load                                           | functional-983329           | jenkins | v1.32.0 | 16 Jan 24 03:31 UTC | 16 Jan 24 03:31 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-983329 image ls                                             | functional-983329           | jenkins | v1.32.0 | 16 Jan 24 03:31 UTC | 16 Jan 24 03:31 UTC |
	| image          | functional-983329 image save --daemon                                  | functional-983329           | jenkins | v1.32.0 | 16 Jan 24 03:31 UTC | 16 Jan 24 03:31 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-983329               |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-983329                                                      | functional-983329           | jenkins | v1.32.0 | 16 Jan 24 03:31 UTC | 16 Jan 24 03:31 UTC |
	|                | image ls --format yaml                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-983329                                                      | functional-983329           | jenkins | v1.32.0 | 16 Jan 24 03:31 UTC | 16 Jan 24 03:31 UTC |
	|                | image ls --format short                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-983329                                                      | functional-983329           | jenkins | v1.32.0 | 16 Jan 24 03:31 UTC | 16 Jan 24 03:31 UTC |
	|                | image ls --format json                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-983329 ssh pgrep                                            | functional-983329           | jenkins | v1.32.0 | 16 Jan 24 03:31 UTC |                     |
	|                | buildkitd                                                              |                             |         |         |                     |                     |
	| image          | functional-983329                                                      | functional-983329           | jenkins | v1.32.0 | 16 Jan 24 03:31 UTC | 16 Jan 24 03:31 UTC |
	|                | image ls --format table                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-983329 image build -t                                       | functional-983329           | jenkins | v1.32.0 | 16 Jan 24 03:31 UTC | 16 Jan 24 03:31 UTC |
	|                | localhost/my-image:functional-983329                                   |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                       |                             |         |         |                     |                     |
	| image          | functional-983329 image ls                                             | functional-983329           | jenkins | v1.32.0 | 16 Jan 24 03:31 UTC | 16 Jan 24 03:31 UTC |
	| delete         | -p functional-983329                                                   | functional-983329           | jenkins | v1.32.0 | 16 Jan 24 03:31 UTC | 16 Jan 24 03:31 UTC |
	| start          | -p ingress-addon-legacy-194312                                         | ingress-addon-legacy-194312 | jenkins | v1.32.0 | 16 Jan 24 03:31 UTC | 16 Jan 24 03:33 UTC |
	|                | --kubernetes-version=v1.18.20                                          |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                                                   |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-194312                                            | ingress-addon-legacy-194312 | jenkins | v1.32.0 | 16 Jan 24 03:33 UTC | 16 Jan 24 03:33 UTC |
	|                | addons enable ingress                                                  |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-194312                                            | ingress-addon-legacy-194312 | jenkins | v1.32.0 | 16 Jan 24 03:33 UTC | 16 Jan 24 03:33 UTC |
	|                | addons enable ingress-dns                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-194312                                            | ingress-addon-legacy-194312 | jenkins | v1.32.0 | 16 Jan 24 03:33 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                                          |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                           |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-194312 ip                                         | ingress-addon-legacy-194312 | jenkins | v1.32.0 | 16 Jan 24 03:35 UTC | 16 Jan 24 03:35 UTC |
	| addons         | ingress-addon-legacy-194312                                            | ingress-addon-legacy-194312 | jenkins | v1.32.0 | 16 Jan 24 03:36 UTC | 16 Jan 24 03:36 UTC |
	|                | addons disable ingress-dns                                             |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-194312                                            | ingress-addon-legacy-194312 | jenkins | v1.32.0 | 16 Jan 24 03:36 UTC | 16 Jan 24 03:36 UTC |
	|                | addons disable ingress                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
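
A blank End Time in the audit table marks a command that never completed successfully; here that is the 03:33 UTC "ssh curl" probe against the ingress controller (and the earlier "ssh pgrep buildkitd" check). A sketch that replays the failing probe outside the harness, with the binary path and profile name taken from this run (illustrative only):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Replay the ingress probe that never finished in this run.
		cmd := exec.Command("out/minikube-linux-arm64",
			"-p", "ingress-addon-legacy-194312",
			"ssh", "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			fmt.Println("probe failed:", err) // times out when ingress never answers
		}
	}
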
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 03:31:47
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 03:31:47.527275  751715 out.go:296] Setting OutFile to fd 1 ...
	I0116 03:31:47.527412  751715 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:31:47.527422  751715 out.go:309] Setting ErrFile to fd 2...
	I0116 03:31:47.527429  751715 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:31:47.527802  751715 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-719286/.minikube/bin
	I0116 03:31:47.528364  751715 out.go:303] Setting JSON to false
	I0116 03:31:47.529318  751715 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11657,"bootTime":1705364251,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0116 03:31:47.529409  751715 start.go:138] virtualization:  
	I0116 03:31:47.532842  751715 out.go:177] * [ingress-addon-legacy-194312] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0116 03:31:47.535738  751715 out.go:177]   - MINIKUBE_LOCATION=17967
	I0116 03:31:47.538011  751715 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 03:31:47.535935  751715 notify.go:220] Checking for updates...
	I0116 03:31:47.542504  751715 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17967-719286/kubeconfig
	I0116 03:31:47.544657  751715 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-719286/.minikube
	I0116 03:31:47.546607  751715 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0116 03:31:47.548324  751715 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 03:31:47.550664  751715 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 03:31:47.573522  751715 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0116 03:31:47.573655  751715 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 03:31:47.657518  751715 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2024-01-16 03:31:47.64856174 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0116 03:31:47.657619  751715 docker.go:295] overlay module found
	I0116 03:31:47.660320  751715 out.go:177] * Using the docker driver based on user configuration
	I0116 03:31:47.662318  751715 start.go:298] selected driver: docker
	I0116 03:31:47.662333  751715 start.go:902] validating driver "docker" against <nil>
	I0116 03:31:47.662346  751715 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 03:31:47.662963  751715 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 03:31:47.727981  751715 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2024-01-16 03:31:47.718668129 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0116 03:31:47.728162  751715 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0116 03:31:47.728427  751715 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0116 03:31:47.730541  751715 out.go:177] * Using Docker driver with root privileges
	I0116 03:31:47.732369  751715 cni.go:84] Creating CNI manager for ""
	I0116 03:31:47.732397  751715 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0116 03:31:47.732408  751715 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0116 03:31:47.732423  751715 start_flags.go:321] config:
	{Name:ingress-addon-legacy-194312 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-194312 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:31:47.734617  751715 out.go:177] * Starting control plane node ingress-addon-legacy-194312 in cluster ingress-addon-legacy-194312
	I0116 03:31:47.736697  751715 cache.go:121] Beginning downloading kic base image for docker with crio
	I0116 03:31:47.738727  751715 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0116 03:31:47.740844  751715 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0116 03:31:47.740933  751715 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0116 03:31:47.757348  751715 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0116 03:31:47.757369  751715 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0116 03:31:47.803862  751715 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I0116 03:31:47.803891  751715 cache.go:56] Caching tarball of preloaded images
	I0116 03:31:47.804057  751715 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0116 03:31:47.806351  751715 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0116 03:31:47.808298  751715 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0116 03:31:47.917888  751715 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4?checksum=md5:8ddd7f37d9a9977fe856222993d36c3d -> /home/jenkins/minikube-integration/17967-719286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I0116 03:31:56.463894  751715 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0116 03:31:56.464012  751715 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17967-719286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0116 03:31:57.656638  751715 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
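
The preload download above carries an md5 hint in its URL query (checksum=md5:...), and preload.go saves and verifies that checksum before trusting the tarball. A standalone sketch of the same verification, assuming the tarball sits in the current directory (simplified; not minikube's actual preload code):

	package main

	import (
		"crypto/md5"
		"fmt"
		"io"
		"os"
	)

	// md5sum hashes a file the way the preload check above does conceptually:
	// the download URL embeds checksum=md5:..., and the saved tarball must
	// match it before being used.
	func md5sum(path string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return "", err
		}
		return fmt.Sprintf("%x", h.Sum(nil)), nil
	}

	func main() {
		sum, err := md5sum("preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4")
		if err != nil {
			panic(err)
		}
		fmt.Println(sum) // compare against the md5 value in the download URL
	}
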
	I0116 03:31:57.657008  751715 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/config.json ...
	I0116 03:31:57.657042  751715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/config.json: {Name:mk1b3dca48e4d625600536baa0ab4ca63a635013 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:31:57.657220  751715 cache.go:194] Successfully downloaded all kic artifacts
	I0116 03:31:57.657279  751715 start.go:365] acquiring machines lock for ingress-addon-legacy-194312: {Name:mkf6be5a35062db95a37ca70744653cce778c477 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 03:31:57.657338  751715 start.go:369] acquired machines lock for "ingress-addon-legacy-194312" in 45.055µs
	I0116 03:31:57.657360  751715 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-194312 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-194312 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 03:31:57.657425  751715 start.go:125] createHost starting for "" (driver="docker")
	I0116 03:31:57.659871  751715 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0116 03:31:57.660106  751715 start.go:159] libmachine.API.Create for "ingress-addon-legacy-194312" (driver="docker")
	I0116 03:31:57.660136  751715 client.go:168] LocalClient.Create starting
	I0116 03:31:57.660222  751715 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17967-719286/.minikube/certs/ca.pem
	I0116 03:31:57.660260  751715 main.go:141] libmachine: Decoding PEM data...
	I0116 03:31:57.660280  751715 main.go:141] libmachine: Parsing certificate...
	I0116 03:31:57.660359  751715 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17967-719286/.minikube/certs/cert.pem
	I0116 03:31:57.660389  751715 main.go:141] libmachine: Decoding PEM data...
	I0116 03:31:57.660405  751715 main.go:141] libmachine: Parsing certificate...
	I0116 03:31:57.660757  751715 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-194312 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0116 03:31:57.677057  751715 cli_runner.go:211] docker network inspect ingress-addon-legacy-194312 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0116 03:31:57.677144  751715 network_create.go:281] running [docker network inspect ingress-addon-legacy-194312] to gather additional debugging logs...
	I0116 03:31:57.677166  751715 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-194312
	W0116 03:31:57.693234  751715 cli_runner.go:211] docker network inspect ingress-addon-legacy-194312 returned with exit code 1
	I0116 03:31:57.693265  751715 network_create.go:284] error running [docker network inspect ingress-addon-legacy-194312]: docker network inspect ingress-addon-legacy-194312: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-194312 not found
	I0116 03:31:57.693279  751715 network_create.go:286] output of [docker network inspect ingress-addon-legacy-194312]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-194312 not found
	
	** /stderr **
	I0116 03:31:57.693394  751715 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0116 03:31:57.709660  751715 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40004ae4b0}
	I0116 03:31:57.709702  751715 network_create.go:124] attempt to create docker network ingress-addon-legacy-194312 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0116 03:31:57.709758  751715 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-194312 ingress-addon-legacy-194312
	I0116 03:31:57.778550  751715 network_create.go:108] docker network ingress-addon-legacy-194312 192.168.49.0/24 created
	I0116 03:31:57.778584  751715 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-194312" container
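
network_create.go picked 192.168.49.0/24 as the first free private subnet, created the bridge network with gateway 192.168.49.1 and MTU 1500, and derived the node's static IP 192.168.49.2 from it. A quick Go sketch to confirm what was created, reusing the IPAM template fields from the inspect call logged above (assumes the docker CLI on PATH):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Confirm the subnet and gateway of the freshly created cluster
		// network, using the same IPAM template fields minikube inspects.
		out, err := exec.Command("docker", "network", "inspect", "-f",
			"{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}",
			"ingress-addon-legacy-194312").Output()
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s\n", out) // expected here: 192.168.49.0/24 192.168.49.1
	}
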
	I0116 03:31:57.778673  751715 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0116 03:31:57.794193  751715 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-194312 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-194312 --label created_by.minikube.sigs.k8s.io=true
	I0116 03:31:57.811926  751715 oci.go:103] Successfully created a docker volume ingress-addon-legacy-194312
	I0116 03:31:57.812008  751715 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-194312-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-194312 --entrypoint /usr/bin/test -v ingress-addon-legacy-194312:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0116 03:31:59.338748  751715 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-194312-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-194312 --entrypoint /usr/bin/test -v ingress-addon-legacy-194312:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib: (1.526697354s)
	I0116 03:31:59.338778  751715 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-194312
	I0116 03:31:59.338793  751715 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0116 03:31:59.338813  751715 kic.go:194] Starting extracting preloaded images to volume ...
	I0116 03:31:59.338902  751715 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17967-719286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-194312:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0116 03:32:04.215228  751715 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17967-719286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-194312:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.876282168s)
	I0116 03:32:04.215259  751715 kic.go:203] duration metric: took 4.876443 seconds to extract preloaded images to volume
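
The preload lands in two docker runs: a throwaway sidecar (03:31:57.812) materializes the volume, then a container whose entrypoint is tar (03:31:59.338) unpacks the lz4 image cache into it. A condensed Go sketch of that second step, with the tarball path and volume name taken from this run (kicbase image digest elided for brevity):

	package main

	import "os/exec"

	func main() {
		// Unpack the preloaded image cache into the named volume by mounting
		// both into a short-lived container whose entrypoint is tar, exactly
		// as the cli_runner invocation above does.
		tarball := "/home/jenkins/minikube-integration/17967-719286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4"
		cmd := exec.Command("docker", "run", "--rm", "--entrypoint", "/usr/bin/tar",
			"-v", tarball+":/preloaded.tar:ro",
			"-v", "ingress-addon-legacy-194312:/extractDir",
			"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866", // digest elided
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}
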
	W0116 03:32:04.215400  751715 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0116 03:32:04.215504  751715 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0116 03:32:04.275692  751715 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-194312 --name ingress-addon-legacy-194312 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-194312 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-194312 --network ingress-addon-legacy-194312 --ip 192.168.49.2 --volume ingress-addon-legacy-194312:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0
	I0116 03:32:04.617009  751715 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-194312 --format={{.State.Running}}
	I0116 03:32:04.640950  751715 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-194312 --format={{.State.Status}}
	I0116 03:32:04.662807  751715 cli_runner.go:164] Run: docker exec ingress-addon-legacy-194312 stat /var/lib/dpkg/alternatives/iptables
	I0116 03:32:04.728217  751715 oci.go:144] the created container "ingress-addon-legacy-194312" has a running status.
	I0116 03:32:04.728244  751715 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17967-719286/.minikube/machines/ingress-addon-legacy-194312/id_rsa...
	I0116 03:32:05.306679  751715 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-719286/.minikube/machines/ingress-addon-legacy-194312/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0116 03:32:05.306766  751715 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17967-719286/.minikube/machines/ingress-addon-legacy-194312/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0116 03:32:05.338791  751715 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-194312 --format={{.State.Status}}
	I0116 03:32:05.369065  751715 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0116 03:32:05.369084  751715 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-194312 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0116 03:32:05.439264  751715 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-194312 --format={{.State.Status}}
	I0116 03:32:05.468314  751715 machine.go:88] provisioning docker machine ...
	I0116 03:32:05.468350  751715 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-194312"
	I0116 03:32:05.468425  751715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-194312
	I0116 03:32:05.493674  751715 main.go:141] libmachine: Using SSH client type: native
	I0116 03:32:05.494100  751715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 33497 <nil> <nil>}
	I0116 03:32:05.494112  751715 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-194312 && echo "ingress-addon-legacy-194312" | sudo tee /etc/hostname
	I0116 03:32:05.669379  751715 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-194312
	
	I0116 03:32:05.669492  751715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-194312
	I0116 03:32:05.699198  751715 main.go:141] libmachine: Using SSH client type: native
	I0116 03:32:05.699595  751715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 33497 <nil> <nil>}
	I0116 03:32:05.699613  751715 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-194312' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-194312/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-194312' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 03:32:05.836970  751715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
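
The SSH fragment above keeps /etc/hosts consistent with the new hostname: rewrite an existing 127.0.1.1 entry if one is present, otherwise append one. A hypothetical standalone port of that logic as a pure string transform (simplified; the real grep matches any line ending in the hostname):

	package main

	import (
		"fmt"
		"regexp"
		"strings"
	)

	// patchHosts mirrors the SSH fragment above: rewrite an existing 127.0.1.1
	// entry to the node hostname, or append one if no entry exists.
	func patchHosts(contents, hostname string) string {
		if strings.Contains(contents, hostname) {
			return contents
		}
		re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if re.MatchString(contents) {
			return re.ReplaceAllString(contents, "127.0.1.1 "+hostname)
		}
		return contents + "127.0.1.1 " + hostname + "\n"
	}

	func main() {
		hosts := "127.0.0.1 localhost\n127.0.1.1 old-name\n"
		fmt.Print(patchHosts(hosts, "ingress-addon-legacy-194312"))
	}
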
	I0116 03:32:05.837037  751715 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17967-719286/.minikube CaCertPath:/home/jenkins/minikube-integration/17967-719286/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17967-719286/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17967-719286/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17967-719286/.minikube}
	I0116 03:32:05.837081  751715 ubuntu.go:177] setting up certificates
	I0116 03:32:05.837113  751715 provision.go:83] configureAuth start
	I0116 03:32:05.837188  751715 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-194312
	I0116 03:32:05.855021  751715 provision.go:138] copyHostCerts
	I0116 03:32:05.855058  751715 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-719286/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17967-719286/.minikube/ca.pem
	I0116 03:32:05.855087  751715 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-719286/.minikube/ca.pem, removing ...
	I0116 03:32:05.855099  751715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-719286/.minikube/ca.pem
	I0116 03:32:05.855171  751715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-719286/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17967-719286/.minikube/ca.pem (1082 bytes)
	I0116 03:32:05.855243  751715 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-719286/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17967-719286/.minikube/cert.pem
	I0116 03:32:05.855260  751715 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-719286/.minikube/cert.pem, removing ...
	I0116 03:32:05.855264  751715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-719286/.minikube/cert.pem
	I0116 03:32:05.855288  751715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-719286/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17967-719286/.minikube/cert.pem (1123 bytes)
	I0116 03:32:05.855325  751715 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-719286/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17967-719286/.minikube/key.pem
	I0116 03:32:05.855339  751715 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-719286/.minikube/key.pem, removing ...
	I0116 03:32:05.855343  751715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-719286/.minikube/key.pem
	I0116 03:32:05.855368  751715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-719286/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17967-719286/.minikube/key.pem (1675 bytes)
	I0116 03:32:05.855408  751715 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17967-719286/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17967-719286/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17967-719286/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-194312 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-194312]
	I0116 03:32:06.064488  751715 provision.go:172] copyRemoteCerts
	I0116 03:32:06.064556  751715 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 03:32:06.064600  751715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-194312
	I0116 03:32:06.082377  751715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33497 SSHKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/machines/ingress-addon-legacy-194312/id_rsa Username:docker}
	I0116 03:32:06.182060  751715 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-719286/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0116 03:32:06.182122  751715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-719286/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0116 03:32:06.208893  751715 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-719286/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0116 03:32:06.208956  751715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-719286/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 03:32:06.236395  751715 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-719286/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0116 03:32:06.236503  751715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-719286/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 03:32:06.262707  751715 provision.go:86] duration metric: configureAuth took 425.566498ms
	I0116 03:32:06.262770  751715 ubuntu.go:193] setting minikube options for container-runtime
	I0116 03:32:06.262966  751715 config.go:182] Loaded profile config "ingress-addon-legacy-194312": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0116 03:32:06.263075  751715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-194312
	I0116 03:32:06.280466  751715 main.go:141] libmachine: Using SSH client type: native
	I0116 03:32:06.280884  751715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 33497 <nil> <nil>}
	I0116 03:32:06.280907  751715 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 03:32:06.552805  751715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 03:32:06.552829  751715 machine.go:91] provisioned docker machine in 1.084489806s
	I0116 03:32:06.552839  751715 client.go:171] LocalClient.Create took 8.892694188s
	I0116 03:32:06.552851  751715 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-194312" took 8.892746242s
	I0116 03:32:06.552860  751715 start.go:300] post-start starting for "ingress-addon-legacy-194312" (driver="docker")
	I0116 03:32:06.552870  751715 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 03:32:06.552947  751715 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 03:32:06.552994  751715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-194312
	I0116 03:32:06.570731  751715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33497 SSHKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/machines/ingress-addon-legacy-194312/id_rsa Username:docker}
	I0116 03:32:06.670458  751715 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 03:32:06.674251  751715 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0116 03:32:06.674286  751715 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0116 03:32:06.674298  751715 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0116 03:32:06.674306  751715 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0116 03:32:06.674315  751715 filesync.go:126] Scanning /home/jenkins/minikube-integration/17967-719286/.minikube/addons for local assets ...
	I0116 03:32:06.674368  751715 filesync.go:126] Scanning /home/jenkins/minikube-integration/17967-719286/.minikube/files for local assets ...
	I0116 03:32:06.674452  751715 filesync.go:149] local asset: /home/jenkins/minikube-integration/17967-719286/.minikube/files/etc/ssl/certs/7246212.pem -> 7246212.pem in /etc/ssl/certs
	I0116 03:32:06.674459  751715 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-719286/.minikube/files/etc/ssl/certs/7246212.pem -> /etc/ssl/certs/7246212.pem
	I0116 03:32:06.674567  751715 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 03:32:06.684677  751715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-719286/.minikube/files/etc/ssl/certs/7246212.pem --> /etc/ssl/certs/7246212.pem (1708 bytes)
	I0116 03:32:06.711353  751715 start.go:303] post-start completed in 158.479061ms
	I0116 03:32:06.711696  751715 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-194312
	I0116 03:32:06.728733  751715 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/config.json ...
	I0116 03:32:06.728995  751715 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0116 03:32:06.729046  751715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-194312
	I0116 03:32:06.746604  751715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33497 SSHKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/machines/ingress-addon-legacy-194312/id_rsa Username:docker}
	I0116 03:32:06.841713  751715 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0116 03:32:06.846957  751715 start.go:128] duration metric: createHost completed in 9.189518039s
	I0116 03:32:06.846982  751715 start.go:83] releasing machines lock for "ingress-addon-legacy-194312", held for 9.189628974s
	I0116 03:32:06.847045  751715 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-194312
	I0116 03:32:06.863801  751715 ssh_runner.go:195] Run: cat /version.json
	I0116 03:32:06.863858  751715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-194312
	I0116 03:32:06.863810  751715 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 03:32:06.863960  751715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-194312
	I0116 03:32:06.884918  751715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33497 SSHKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/machines/ingress-addon-legacy-194312/id_rsa Username:docker}
	I0116 03:32:06.887755  751715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33497 SSHKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/machines/ingress-addon-legacy-194312/id_rsa Username:docker}
	I0116 03:32:07.110745  751715 ssh_runner.go:195] Run: systemctl --version
	I0116 03:32:07.115968  751715 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 03:32:07.261833  751715 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0116 03:32:07.267138  751715 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 03:32:07.289097  751715 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0116 03:32:07.289171  751715 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 03:32:07.324054  751715 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
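
Between 03:32:07.26 and 07.32, minikube sidelines conflicting CNI configs by renaming the loopback, bridge, and podman files with a .mk_disabled suffix so the kindnet CNI chosen earlier owns pod networking. A simplified local equivalent of that find-and-rename pass (destructive; point it at a scratch copy of /etc/cni/net.d when experimenting):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		// Rename bridge/podman CNI configs out of the way with a .mk_disabled
		// suffix, as the find/mv pass above does over SSH.
		dir := "/etc/cni/net.d"
		matches, err := filepath.Glob(filepath.Join(dir, "*"))
		if err != nil {
			panic(err)
		}
		for _, m := range matches {
			base := filepath.Base(m)
			if strings.HasSuffix(base, ".mk_disabled") {
				continue
			}
			if strings.Contains(base, "bridge") || strings.Contains(base, "podman") {
				fmt.Println("disabling", m)
				if err := os.Rename(m, m+".mk_disabled"); err != nil {
					fmt.Fprintln(os.Stderr, err)
				}
			}
		}
	}
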
	I0116 03:32:07.324127  751715 start.go:475] detecting cgroup driver to use...
	I0116 03:32:07.324158  751715 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0116 03:32:07.324211  751715 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 03:32:07.341373  751715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 03:32:07.354607  751715 docker.go:217] disabling cri-docker service (if available) ...
	I0116 03:32:07.354718  751715 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 03:32:07.369898  751715 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 03:32:07.385674  751715 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 03:32:07.483835  751715 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 03:32:07.583224  751715 docker.go:233] disabling docker service ...
	I0116 03:32:07.583333  751715 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 03:32:07.603765  751715 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 03:32:07.616913  751715 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 03:32:07.719219  751715 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 03:32:07.828278  751715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 03:32:07.840579  751715 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 03:32:07.858476  751715 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0116 03:32:07.858538  751715 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:32:07.869594  751715 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 03:32:07.869660  751715 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:32:07.880673  751715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:32:07.891168  751715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:32:07.901713  751715 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 03:32:07.911722  751715 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 03:32:07.921167  751715 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 03:32:07.930301  751715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:32:08.035254  751715 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 03:32:08.159192  751715 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 03:32:08.159262  751715 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 03:32:08.163991  751715 start.go:543] Will wait 60s for crictl version
	I0116 03:32:08.164055  751715 ssh_runner.go:195] Run: which crictl
	I0116 03:32:08.168869  751715 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 03:32:08.209496  751715 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0116 03:32:08.209579  751715 ssh_runner.go:195] Run: crio --version
	I0116 03:32:08.255424  751715 ssh_runner.go:195] Run: crio --version
	I0116 03:32:08.301790  751715 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I0116 03:32:08.303593  751715 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-194312 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0116 03:32:08.320843  751715 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0116 03:32:08.325195  751715 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:32:08.337794  751715 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0116 03:32:08.337859  751715 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:32:08.389138  751715 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0116 03:32:08.389206  751715 ssh_runner.go:195] Run: which lz4
	I0116 03:32:08.393430  751715 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-719286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0116 03:32:08.393521  751715 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0116 03:32:08.397522  751715 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 03:32:08.397551  751715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-719286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (489766197 bytes)
	I0116 03:32:10.490618  751715 crio.go:444] Took 2.097114 seconds to copy over tarball
	I0116 03:32:10.490738  751715 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 03:32:13.095753  751715 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.604954468s)
	I0116 03:32:13.095781  751715 crio.go:451] Took 2.605088 seconds to extract the tarball
	I0116 03:32:13.095791  751715 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 03:32:13.445687  751715 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:32:13.487676  751715 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0116 03:32:13.487697  751715 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0116 03:32:13.487770  751715 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:32:13.487951  751715 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0116 03:32:13.488023  751715 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0116 03:32:13.488142  751715 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0116 03:32:13.488238  751715 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0116 03:32:13.488310  751715 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0116 03:32:13.488391  751715 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0116 03:32:13.488464  751715 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0116 03:32:13.489318  751715 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0116 03:32:13.489695  751715 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:32:13.489923  751715 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0116 03:32:13.490068  751715 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0116 03:32:13.490181  751715 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0116 03:32:13.490289  751715 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0116 03:32:13.490407  751715 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0116 03:32:13.490602  751715 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0116 03:32:13.825163  751715 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0116 03:32:13.876389  751715 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0116 03:32:13.876454  751715 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0116 03:32:13.876506  751715 ssh_runner.go:195] Run: which crictl
	I0116 03:32:13.880897  751715 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	W0116 03:32:13.890215  751715 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0116 03:32:13.890412  751715 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	W0116 03:32:13.895689  751715 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0116 03:32:13.895868  751715 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	W0116 03:32:13.898296  751715 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0116 03:32:13.898568  751715 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	W0116 03:32:13.902657  751715 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0116 03:32:13.902860  751715 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	W0116 03:32:13.905458  751715 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0116 03:32:13.905645  751715 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	W0116 03:32:13.907383  751715 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0116 03:32:13.907572  751715 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0116 03:32:13.958097  751715 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-719286/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I0116 03:32:13.992807  751715 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0116 03:32:13.992854  751715 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0116 03:32:13.992934  751715 ssh_runner.go:195] Run: which crictl
	W0116 03:32:14.030271  751715 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0116 03:32:14.030441  751715 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:32:14.068118  751715 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0116 03:32:14.068177  751715 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0116 03:32:14.068235  751715 ssh_runner.go:195] Run: which crictl
	I0116 03:32:14.105676  751715 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0116 03:32:14.105730  751715 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0116 03:32:14.105782  751715 ssh_runner.go:195] Run: which crictl
	I0116 03:32:14.105868  751715 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0116 03:32:14.105887  751715 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0116 03:32:14.105917  751715 ssh_runner.go:195] Run: which crictl
	I0116 03:32:14.105976  751715 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0116 03:32:14.106002  751715 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0116 03:32:14.106031  751715 ssh_runner.go:195] Run: which crictl
	I0116 03:32:14.106095  751715 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0116 03:32:14.106115  751715 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0116 03:32:14.106134  751715 ssh_runner.go:195] Run: which crictl
	I0116 03:32:14.106195  751715 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0116 03:32:14.262515  751715 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0116 03:32:14.262570  751715 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:32:14.262621  751715 ssh_runner.go:195] Run: which crictl
	I0116 03:32:14.262739  751715 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0116 03:32:14.262781  751715 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0116 03:32:14.262824  751715 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0116 03:32:14.262866  751715 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0116 03:32:14.262910  751715 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0116 03:32:14.262748  751715 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-719286/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I0116 03:32:14.374879  751715 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-719286/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I0116 03:32:14.374988  751715 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:32:14.375039  751715 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-719286/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I0116 03:32:14.407073  751715 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-719286/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0116 03:32:14.407113  751715 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-719286/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I0116 03:32:14.407148  751715 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-719286/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I0116 03:32:14.450042  751715 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17967-719286/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0116 03:32:14.450149  751715 cache_images.go:92] LoadImages completed in 962.440256ms
	W0116 03:32:14.450239  751715 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17967-719286/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2: no such file or directory
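
The inspect/rmi/load sequence above is minikube's per-image cache check: compare the runtime's image ID against the expected arm64 hash, remove mismatched (amd64) copies, then reload from the local cache directory. A rough per-image sketch; $want_id is an illustrative placeholder, not from the log:

    # Per-image flow reconstructed from the log above; $want_id is hypothetical
    img=registry.k8s.io/pause:3.2
    have=$(sudo podman image inspect --format '{{.Id}}' "$img" 2>/dev/null)
    if [ "$have" != "$want_id" ]; then
      sudo /usr/bin/crictl rmi "$img"   # drop the wrong-arch copy
      # ...then load the arm64 tarball from .minikube/cache/images/arm64/
    fi
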
	I0116 03:32:14.450327  751715 ssh_runner.go:195] Run: crio config
	I0116 03:32:14.522643  751715 cni.go:84] Creating CNI manager for ""
	I0116 03:32:14.522672  751715 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0116 03:32:14.522703  751715 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 03:32:14.522724  751715 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-194312 NodeName:ingress-addon-legacy-194312 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0116 03:32:14.522896  751715 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-194312"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 03:32:14.522983  751715 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-194312 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-194312 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 03:32:14.523055  751715 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0116 03:32:14.533361  751715 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 03:32:14.533471  751715 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 03:32:14.543035  751715 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I0116 03:32:14.563265  751715 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
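
Once the drop-in and unit file above are in place, systemd needs a daemon-reload before the new ExecStart takes effect. One way to sanity-check the effective unit (an assumption; this check is not in the log):

    # Verify the kubelet command line systemd will actually run
    sudo systemctl daemon-reload
    systemctl cat kubelet | grep -A1 '^ExecStart=/var'
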
	I0116 03:32:14.583086  751715 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0116 03:32:14.602815  751715 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0116 03:32:14.606997  751715 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:32:14.619360  751715 certs.go:56] Setting up /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312 for IP: 192.168.49.2
	I0116 03:32:14.619400  751715 certs.go:190] acquiring lock for shared ca certs: {Name:mkc1cd6c1048e37282c341d17731487c267a60dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:32:14.619567  751715 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17967-719286/.minikube/ca.key
	I0116 03:32:14.619615  751715 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17967-719286/.minikube/proxy-client-ca.key
	I0116 03:32:14.619676  751715 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/client.key
	I0116 03:32:14.619690  751715 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/client.crt with IP's: []
	I0116 03:32:15.081868  751715 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/client.crt ...
	I0116 03:32:15.081904  751715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/client.crt: {Name:mka96d044483e54a2f9fa82c832ebb45f23e175d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:32:15.082136  751715 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/client.key ...
	I0116 03:32:15.082152  751715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/client.key: {Name:mk0c2ea83303d3a881ab9a4d6a09f8db4d29d0ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:32:15.082238  751715 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/apiserver.key.dd3b5fb2
	I0116 03:32:15.082250  751715 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0116 03:32:15.529540  751715 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/apiserver.crt.dd3b5fb2 ...
	I0116 03:32:15.529571  751715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/apiserver.crt.dd3b5fb2: {Name:mkf6f7a44c56adcec3a4e33b41a3d773d4502147 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:32:15.529752  751715 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/apiserver.key.dd3b5fb2 ...
	I0116 03:32:15.529766  751715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/apiserver.key.dd3b5fb2: {Name:mk4ff8635d6cf9c596a152e0cc7355393ec2d12a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:32:15.529848  751715 certs.go:337] copying /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/apiserver.crt
	I0116 03:32:15.529949  751715 certs.go:341] copying /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/apiserver.key
	I0116 03:32:15.530008  751715 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/proxy-client.key
	I0116 03:32:15.530025  751715 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/proxy-client.crt with IP's: []
	I0116 03:32:15.974346  751715 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/proxy-client.crt ...
	I0116 03:32:15.974379  751715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/proxy-client.crt: {Name:mk8d966bb460b5e75a2214d4c89a012deec419b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:32:15.974572  751715 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/proxy-client.key ...
	I0116 03:32:15.974586  751715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/proxy-client.key: {Name:mk4135ca6a6b11d40553101704ca713f87790b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:32:15.974664  751715 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0116 03:32:15.974692  751715 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0116 03:32:15.974704  751715 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0116 03:32:15.974724  751715 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0116 03:32:15.974741  751715 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-719286/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0116 03:32:15.974753  751715 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-719286/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0116 03:32:15.974768  751715 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-719286/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0116 03:32:15.974786  751715 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-719286/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0116 03:32:15.974847  751715 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-719286/.minikube/certs/home/jenkins/minikube-integration/17967-719286/.minikube/certs/724621.pem (1338 bytes)
	W0116 03:32:15.974891  751715 certs.go:433] ignoring /home/jenkins/minikube-integration/17967-719286/.minikube/certs/home/jenkins/minikube-integration/17967-719286/.minikube/certs/724621_empty.pem, impossibly tiny 0 bytes
	I0116 03:32:15.974913  751715 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-719286/.minikube/certs/home/jenkins/minikube-integration/17967-719286/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 03:32:15.974947  751715 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-719286/.minikube/certs/home/jenkins/minikube-integration/17967-719286/.minikube/certs/ca.pem (1082 bytes)
	I0116 03:32:15.974974  751715 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-719286/.minikube/certs/home/jenkins/minikube-integration/17967-719286/.minikube/certs/cert.pem (1123 bytes)
	I0116 03:32:15.975014  751715 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-719286/.minikube/certs/home/jenkins/minikube-integration/17967-719286/.minikube/certs/key.pem (1675 bytes)
	I0116 03:32:15.975069  751715 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-719286/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17967-719286/.minikube/files/etc/ssl/certs/7246212.pem (1708 bytes)
	I0116 03:32:15.975101  751715 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-719286/.minikube/certs/724621.pem -> /usr/share/ca-certificates/724621.pem
	I0116 03:32:15.975121  751715 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-719286/.minikube/files/etc/ssl/certs/7246212.pem -> /usr/share/ca-certificates/7246212.pem
	I0116 03:32:15.975141  751715 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-719286/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:32:15.975736  751715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 03:32:16.004395  751715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0116 03:32:16.032461  751715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 03:32:16.059791  751715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0116 03:32:16.086431  751715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-719286/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 03:32:16.114163  751715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-719286/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0116 03:32:16.142160  751715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-719286/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 03:32:16.169605  751715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-719286/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0116 03:32:16.197229  751715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-719286/.minikube/certs/724621.pem --> /usr/share/ca-certificates/724621.pem (1338 bytes)
	I0116 03:32:16.225016  751715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-719286/.minikube/files/etc/ssl/certs/7246212.pem --> /usr/share/ca-certificates/7246212.pem (1708 bytes)
	I0116 03:32:16.252106  751715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-719286/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 03:32:16.280028  751715 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 03:32:16.301145  751715 ssh_runner.go:195] Run: openssl version
	I0116 03:32:16.307941  751715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/724621.pem && ln -fs /usr/share/ca-certificates/724621.pem /etc/ssl/certs/724621.pem"
	I0116 03:32:16.319513  751715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/724621.pem
	I0116 03:32:16.324175  751715 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 03:27 /usr/share/ca-certificates/724621.pem
	I0116 03:32:16.324284  751715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/724621.pem
	I0116 03:32:16.332342  751715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/724621.pem /etc/ssl/certs/51391683.0"
	I0116 03:32:16.343334  751715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7246212.pem && ln -fs /usr/share/ca-certificates/7246212.pem /etc/ssl/certs/7246212.pem"
	I0116 03:32:16.354501  751715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7246212.pem
	I0116 03:32:16.358839  751715 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 03:27 /usr/share/ca-certificates/7246212.pem
	I0116 03:32:16.358943  751715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7246212.pem
	I0116 03:32:16.367056  751715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7246212.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 03:32:16.378486  751715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 03:32:16.389676  751715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:32:16.394039  751715 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 03:21 /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:32:16.394098  751715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:32:16.402223  751715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
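
The test -L / ln -fs pairs above create OpenSSL subject-hash symlinks (51391683.0, 3ec20f2e.0, b5213941.0) so that the system trust store can look each cert up by hash. The link name is derived like this:

    # The symlink name is the certificate's subject hash plus ".0"
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # prints e.g. b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
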
	I0116 03:32:16.413257  751715 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 03:32:16.417478  751715 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 03:32:16.417534  751715 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-194312 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-194312 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:32:16.417618  751715 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 03:32:16.417686  751715 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:32:16.460195  751715 cri.go:89] found id: ""
	I0116 03:32:16.460308  751715 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 03:32:16.470822  751715 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:32:16.481597  751715 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0116 03:32:16.481716  751715 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:32:16.492820  751715 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:32:16.492891  751715 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0116 03:32:16.547437  751715 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0116 03:32:16.547642  751715 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 03:32:16.599225  751715 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0116 03:32:16.599298  751715 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-aws
	I0116 03:32:16.599337  751715 kubeadm.go:322] OS: Linux
	I0116 03:32:16.599384  751715 kubeadm.go:322] CGROUPS_CPU: enabled
	I0116 03:32:16.599433  751715 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0116 03:32:16.599481  751715 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0116 03:32:16.599532  751715 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0116 03:32:16.599581  751715 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0116 03:32:16.599638  751715 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0116 03:32:16.687596  751715 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 03:32:16.687707  751715 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 03:32:16.687800  751715 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 03:32:16.919609  751715 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 03:32:16.921174  751715 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 03:32:16.921224  751715 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0116 03:32:17.024458  751715 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 03:32:17.030275  751715 out.go:204]   - Generating certificates and keys ...
	I0116 03:32:17.030439  751715 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 03:32:17.030548  751715 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 03:32:17.587482  751715 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0116 03:32:18.279319  751715 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0116 03:32:18.437305  751715 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0116 03:32:19.314481  751715 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0116 03:32:19.894682  751715 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0116 03:32:19.895002  751715 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-194312 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0116 03:32:20.402344  751715 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0116 03:32:20.402735  751715 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-194312 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0116 03:32:20.806876  751715 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0116 03:32:21.990581  751715 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0116 03:32:22.643076  751715 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0116 03:32:22.643341  751715 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 03:32:23.131382  751715 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 03:32:23.339254  751715 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 03:32:23.588407  751715 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 03:32:24.267322  751715 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 03:32:24.270866  751715 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 03:32:24.273200  751715 out.go:204]   - Booting up control plane ...
	I0116 03:32:24.273297  751715 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 03:32:24.282003  751715 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 03:32:24.282086  751715 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 03:32:24.282184  751715 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 03:32:24.282526  751715 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 03:32:36.284635  751715 kubeadm.go:322] [apiclient] All control plane components are healthy after 12.002354 seconds
	I0116 03:32:36.284752  751715 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 03:32:36.295663  751715 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 03:32:36.814708  751715 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0116 03:32:36.814860  751715 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-194312 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0116 03:32:37.323697  751715 kubeadm.go:322] [bootstrap-token] Using token: ft44x6.pxn59kyxkb7xw2q1
	I0116 03:32:37.326025  751715 out.go:204]   - Configuring RBAC rules ...
	I0116 03:32:37.326165  751715 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 03:32:37.331450  751715 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0116 03:32:37.349057  751715 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 03:32:37.352594  751715 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 03:32:37.360493  751715 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 03:32:37.364190  751715 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 03:32:37.380384  751715 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0116 03:32:37.721548  751715 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0116 03:32:37.800126  751715 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0116 03:32:37.800149  751715 kubeadm.go:322] 
	I0116 03:32:37.800207  751715 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0116 03:32:37.800216  751715 kubeadm.go:322] 
	I0116 03:32:37.800298  751715 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0116 03:32:37.800308  751715 kubeadm.go:322] 
	I0116 03:32:37.800333  751715 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0116 03:32:37.800393  751715 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 03:32:37.800453  751715 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 03:32:37.800461  751715 kubeadm.go:322] 
	I0116 03:32:37.800510  751715 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0116 03:32:37.800584  751715 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 03:32:37.800652  751715 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 03:32:37.800660  751715 kubeadm.go:322] 
	I0116 03:32:37.800739  751715 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0116 03:32:37.800815  751715 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0116 03:32:37.800823  751715 kubeadm.go:322] 
	I0116 03:32:37.800902  751715 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ft44x6.pxn59kyxkb7xw2q1 \
	I0116 03:32:37.801013  751715 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:78b446be54113cf43e3853835de42782a6b98d45d441359ad299b10cb7c55484 \
	I0116 03:32:37.801039  751715 kubeadm.go:322]     --control-plane 
	I0116 03:32:37.801046  751715 kubeadm.go:322] 
	I0116 03:32:37.801126  751715 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0116 03:32:37.801134  751715 kubeadm.go:322] 
	I0116 03:32:37.801211  751715 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ft44x6.pxn59kyxkb7xw2q1 \
	I0116 03:32:37.801313  751715 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:78b446be54113cf43e3853835de42782a6b98d45d441359ad299b10cb7c55484 
	I0116 03:32:37.804157  751715 kubeadm.go:322] W0116 03:32:16.546866    1224 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0116 03:32:37.804368  751715 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I0116 03:32:37.804475  751715 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 03:32:37.804598  751715 kubeadm.go:322] W0116 03:32:24.277587    1224 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0116 03:32:37.804717  751715 kubeadm.go:322] W0116 03:32:24.278821    1224 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0116 03:32:37.804737  751715 cni.go:84] Creating CNI manager for ""
	I0116 03:32:37.804746  751715 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0116 03:32:37.806750  751715 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0116 03:32:37.808644  751715 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0116 03:32:37.813923  751715 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I0116 03:32:37.813943  751715 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0116 03:32:37.836058  751715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0116 03:32:38.294391  751715 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 03:32:38.294510  751715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:32:38.294586  751715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd minikube.k8s.io/name=ingress-addon-legacy-194312 minikube.k8s.io/updated_at=2024_01_16T03_32_38_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:32:38.445433  751715 ops.go:34] apiserver oom_adj: -16
	I0116 03:32:38.445520  751715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:32:38.946047  751715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:32:39.445587  751715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:32:39.945901  751715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:32:40.445654  751715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:32:40.945696  751715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:32:41.445996  751715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:32:41.945943  751715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:32:42.446539  751715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:32:42.946220  751715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:32:43.446388  751715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:32:43.946332  751715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:32:44.446352  751715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:32:44.945986  751715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:32:45.445602  751715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:32:45.945900  751715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:32:46.446594  751715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:32:46.945889  751715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:32:47.445635  751715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:32:47.945662  751715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:32:48.446364  751715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:32:48.946236  751715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:32:49.446284  751715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:32:49.946326  751715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:32:50.446178  751715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:32:50.946257  751715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:32:51.445571  751715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:32:51.945915  751715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:32:52.445886  751715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:32:52.945670  751715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:32:53.039844  751715 kubeadm.go:1088] duration metric: took 14.745374227s to wait for elevateKubeSystemPrivileges.
	I0116 03:32:53.039879  751715 kubeadm.go:406] StartCluster complete in 36.62235589s
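
The burst of identical "kubectl get sa default" runs above is a ~500ms poll for the default ServiceAccount to appear, which is what the 14.745s elevateKubeSystemPrivileges duration measures. A minimal equivalent, assuming the same binary and kubeconfig paths:

    # Wait until the default ServiceAccount exists (mirrors the polling above)
    until sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
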
	I0116 03:32:53.039896  751715 settings.go:142] acquiring lock: {Name:mk09c1af0296e0da2e97c553b187ecf4aec5fda4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:32:53.039958  751715 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17967-719286/kubeconfig
	I0116 03:32:53.040701  751715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-719286/kubeconfig: {Name:mk79a070d6b32850c1522eb5f09a1fb050b71442 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:32:53.040928  751715 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 03:32:53.041221  751715 config.go:182] Loaded profile config "ingress-addon-legacy-194312": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0116 03:32:53.041333  751715 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 03:32:53.041409  751715 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-194312"
	I0116 03:32:53.041425  751715 addons.go:234] Setting addon storage-provisioner=true in "ingress-addon-legacy-194312"
	I0116 03:32:53.041477  751715 host.go:66] Checking if "ingress-addon-legacy-194312" exists ...
	I0116 03:32:53.041441  751715 kapi.go:59] client config for ingress-addon-legacy-194312: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/client.crt", KeyFile:"/home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/client.key", CAFile:"/home/jenkins/minikube-integration/17967-719286/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9c50), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 03:32:53.041941  751715 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-194312 --format={{.State.Status}}
	I0116 03:32:53.042424  751715 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-194312"
	I0116 03:32:53.042445  751715 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-194312"
	I0116 03:32:53.042698  751715 cert_rotation.go:137] Starting client certificate rotation controller
	I0116 03:32:53.042713  751715 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-194312 --format={{.State.Status}}
	I0116 03:32:53.080355  751715 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:32:53.083866  751715 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:32:53.083892  751715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 03:32:53.083954  751715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-194312
	I0116 03:32:53.101422  751715 kapi.go:59] client config for ingress-addon-legacy-194312: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/client.crt", KeyFile:"/home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/client.key", CAFile:"/home/jenkins/minikube-integration/17967-719286/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9c50), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 03:32:53.101678  751715 addons.go:234] Setting addon default-storageclass=true in "ingress-addon-legacy-194312"
	I0116 03:32:53.101710  751715 host.go:66] Checking if "ingress-addon-legacy-194312" exists ...
	I0116 03:32:53.102150  751715 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-194312 --format={{.State.Status}}
	I0116 03:32:53.128501  751715 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 03:32:53.128523  751715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 03:32:53.128581  751715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-194312
	I0116 03:32:53.157515  751715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33497 SSHKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/machines/ingress-addon-legacy-194312/id_rsa Username:docker}
	I0116 03:32:53.158444  751715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33497 SSHKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/machines/ingress-addon-legacy-194312/id_rsa Username:docker}
	I0116 03:32:53.310865  751715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:32:53.321925  751715 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0116 03:32:53.386896  751715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 03:32:53.614930  751715 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-194312" context rescaled to 1 replicas
	I0116 03:32:53.614970  751715 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 03:32:53.623356  751715 out.go:177] * Verifying Kubernetes components...
	I0116 03:32:53.627664  751715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:32:53.796764  751715 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0116 03:32:53.803027  751715 kapi.go:59] client config for ingress-addon-legacy-194312: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/client.crt", KeyFile:"/home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/client.key", CAFile:"/home/jenkins/minikube-integration/17967-719286/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9c50), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 03:32:53.803378  751715 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-194312" to be "Ready" ...
	I0116 03:32:53.821648  751715 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0116 03:32:53.823614  751715 addons.go:505] enable addons completed in 782.276435ms: enabled=[storage-provisioner default-storageclass]
	I0116 03:32:55.806745  751715 node_ready.go:58] node "ingress-addon-legacy-194312" has status "Ready":"False"
	I0116 03:32:58.306320  751715 node_ready.go:58] node "ingress-addon-legacy-194312" has status "Ready":"False"
	I0116 03:33:00.806243  751715 node_ready.go:58] node "ingress-addon-legacy-194312" has status "Ready":"False"
	I0116 03:33:01.306059  751715 node_ready.go:49] node "ingress-addon-legacy-194312" has status "Ready":"True"
	I0116 03:33:01.306086  751715 node_ready.go:38] duration metric: took 7.502662113s waiting for node "ingress-addon-legacy-194312" to be "Ready" ...
	I0116 03:33:01.306100  751715 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:33:01.313055  751715 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-gnz7r" in "kube-system" namespace to be "Ready" ...
	I0116 03:33:03.316737  751715 pod_ready.go:102] pod "coredns-66bff467f8-gnz7r" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-01-16 03:32:52 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0116 03:33:05.815834  751715 pod_ready.go:102] pod "coredns-66bff467f8-gnz7r" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-01-16 03:32:52 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0116 03:33:07.820641  751715 pod_ready.go:102] pod "coredns-66bff467f8-gnz7r" in "kube-system" namespace has status "Ready":"False"
	I0116 03:33:09.819928  751715 pod_ready.go:92] pod "coredns-66bff467f8-gnz7r" in "kube-system" namespace has status "Ready":"True"
	I0116 03:33:09.819953  751715 pod_ready.go:81] duration metric: took 8.506836444s waiting for pod "coredns-66bff467f8-gnz7r" in "kube-system" namespace to be "Ready" ...
	I0116 03:33:09.819964  751715 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-194312" in "kube-system" namespace to be "Ready" ...
	I0116 03:33:09.824896  751715 pod_ready.go:92] pod "etcd-ingress-addon-legacy-194312" in "kube-system" namespace has status "Ready":"True"
	I0116 03:33:09.824916  751715 pod_ready.go:81] duration metric: took 4.943457ms waiting for pod "etcd-ingress-addon-legacy-194312" in "kube-system" namespace to be "Ready" ...
	I0116 03:33:09.824929  751715 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-194312" in "kube-system" namespace to be "Ready" ...
	I0116 03:33:09.829953  751715 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-194312" in "kube-system" namespace has status "Ready":"True"
	I0116 03:33:09.829978  751715 pod_ready.go:81] duration metric: took 5.041098ms waiting for pod "kube-apiserver-ingress-addon-legacy-194312" in "kube-system" namespace to be "Ready" ...
	I0116 03:33:09.829990  751715 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-194312" in "kube-system" namespace to be "Ready" ...
	I0116 03:33:09.835152  751715 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-194312" in "kube-system" namespace has status "Ready":"True"
	I0116 03:33:09.835172  751715 pod_ready.go:81] duration metric: took 5.174818ms waiting for pod "kube-controller-manager-ingress-addon-legacy-194312" in "kube-system" namespace to be "Ready" ...
	I0116 03:33:09.835182  751715 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m4ld2" in "kube-system" namespace to be "Ready" ...
	I0116 03:33:09.841162  751715 pod_ready.go:92] pod "kube-proxy-m4ld2" in "kube-system" namespace has status "Ready":"True"
	I0116 03:33:09.841184  751715 pod_ready.go:81] duration metric: took 5.994648ms waiting for pod "kube-proxy-m4ld2" in "kube-system" namespace to be "Ready" ...
	I0116 03:33:09.841195  751715 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-194312" in "kube-system" namespace to be "Ready" ...
	I0116 03:33:10.014596  751715 request.go:629] Waited for 173.310782ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-194312
	I0116 03:33:10.214551  751715 request.go:629] Waited for 197.346515ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-194312
	I0116 03:33:10.217186  751715 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-194312" in "kube-system" namespace has status "Ready":"True"
	I0116 03:33:10.217210  751715 pod_ready.go:81] duration metric: took 376.00732ms waiting for pod "kube-scheduler-ingress-addon-legacy-194312" in "kube-system" namespace to be "Ready" ...
	I0116 03:33:10.217222  751715 pod_ready.go:38] duration metric: took 8.911106665s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:33:10.217236  751715 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:33:10.217300  751715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:33:10.231006  751715 api_server.go:72] duration metric: took 16.616004291s to wait for apiserver process to appear ...
	I0116 03:33:10.231030  751715 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:33:10.231048  751715 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0116 03:33:10.239583  751715 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0116 03:33:10.240459  751715 api_server.go:141] control plane version: v1.18.20
	I0116 03:33:10.240483  751715 api_server.go:131] duration metric: took 9.446474ms to wait for apiserver health ...
	I0116 03:33:10.240492  751715 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:33:10.414856  751715 request.go:629] Waited for 174.302667ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0116 03:33:10.420585  751715 system_pods.go:59] 8 kube-system pods found
	I0116 03:33:10.420616  751715 system_pods.go:61] "coredns-66bff467f8-gnz7r" [27d66324-cfeb-4b37-8299-b12f874a33b3] Running
	I0116 03:33:10.420623  751715 system_pods.go:61] "etcd-ingress-addon-legacy-194312" [3d72f985-b435-435e-a2f3-d144f9ddf684] Running
	I0116 03:33:10.420628  751715 system_pods.go:61] "kindnet-xw5sg" [23da5ce0-ec0c-45c8-9f12-fa19d2af56d6] Running
	I0116 03:33:10.420652  751715 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-194312" [e185c04e-e380-4f6c-8605-eea5e8a0e72e] Running
	I0116 03:33:10.420664  751715 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-194312" [ce706c37-83d3-44b0-b41e-5a726023f4e5] Running
	I0116 03:33:10.420670  751715 system_pods.go:61] "kube-proxy-m4ld2" [124874ff-a48f-457c-8510-c6dd38b904f2] Running
	I0116 03:33:10.420675  751715 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-194312" [9093dce5-94fb-4a6d-83fe-72d88f68388d] Running
	I0116 03:33:10.420679  751715 system_pods.go:61] "storage-provisioner" [2a27dbda-4dac-472f-8d19-aca10b288481] Running
	I0116 03:33:10.420686  751715 system_pods.go:74] duration metric: took 180.187677ms to wait for pod list to return data ...
	I0116 03:33:10.420699  751715 default_sa.go:34] waiting for default service account to be created ...
	I0116 03:33:10.615107  751715 request.go:629] Waited for 194.31089ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0116 03:33:10.617402  751715 default_sa.go:45] found service account: "default"
	I0116 03:33:10.617430  751715 default_sa.go:55] duration metric: took 196.723503ms for default service account to be created ...
	I0116 03:33:10.617440  751715 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 03:33:10.814572  751715 request.go:629] Waited for 197.033315ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0116 03:33:10.820240  751715 system_pods.go:86] 8 kube-system pods found
	I0116 03:33:10.820270  751715 system_pods.go:89] "coredns-66bff467f8-gnz7r" [27d66324-cfeb-4b37-8299-b12f874a33b3] Running
	I0116 03:33:10.820278  751715 system_pods.go:89] "etcd-ingress-addon-legacy-194312" [3d72f985-b435-435e-a2f3-d144f9ddf684] Running
	I0116 03:33:10.820282  751715 system_pods.go:89] "kindnet-xw5sg" [23da5ce0-ec0c-45c8-9f12-fa19d2af56d6] Running
	I0116 03:33:10.820287  751715 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-194312" [e185c04e-e380-4f6c-8605-eea5e8a0e72e] Running
	I0116 03:33:10.820296  751715 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-194312" [ce706c37-83d3-44b0-b41e-5a726023f4e5] Running
	I0116 03:33:10.820300  751715 system_pods.go:89] "kube-proxy-m4ld2" [124874ff-a48f-457c-8510-c6dd38b904f2] Running
	I0116 03:33:10.820305  751715 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-194312" [9093dce5-94fb-4a6d-83fe-72d88f68388d] Running
	I0116 03:33:10.820316  751715 system_pods.go:89] "storage-provisioner" [2a27dbda-4dac-472f-8d19-aca10b288481] Running
	I0116 03:33:10.820323  751715 system_pods.go:126] duration metric: took 202.861258ms to wait for k8s-apps to be running ...
	I0116 03:33:10.820335  751715 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 03:33:10.820391  751715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:33:10.834186  751715 system_svc.go:56] duration metric: took 13.840987ms WaitForService to wait for kubelet.
	I0116 03:33:10.834211  751715 kubeadm.go:581] duration metric: took 17.219216946s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 03:33:10.834230  751715 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:33:11.014614  751715 request.go:629] Waited for 180.2988ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0116 03:33:11.017429  751715 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0116 03:33:11.017460  751715 node_conditions.go:123] node cpu capacity is 2
	I0116 03:33:11.017500  751715 node_conditions.go:105] duration metric: took 183.235759ms to run NodePressure ...
	I0116 03:33:11.017520  751715 start.go:228] waiting for startup goroutines ...
	I0116 03:33:11.017527  751715 start.go:233] waiting for cluster config update ...
	I0116 03:33:11.017541  751715 start.go:242] writing updated cluster config ...
	I0116 03:33:11.017824  751715 ssh_runner.go:195] Run: rm -f paused
	I0116 03:33:11.080740  751715 start.go:600] kubectl: 1.29.0, cluster: 1.18.20 (minor skew: 11)
	I0116 03:33:11.083436  751715 out.go:177] 
	W0116 03:33:11.085566  751715 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.18.20.
	I0116 03:33:11.087661  751715 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0116 03:33:11.089565  751715 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-194312" cluster and "default" namespace by default
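	
	[editor's note] The escaped sed one-liner logged at 03:32:53.321925 above is hard to read in-line. Unescaped, it inserts the following block into the CoreDNS Corefile just before its "forward . /etc/resolv.conf" directive (plus a "log" line before "errors"); this is what produces the host record reported at 03:32:53.796764:
	
	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }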
	
	
	==> CRI-O <==
	Jan 16 03:36:09 ingress-addon-legacy-194312 crio[894]: time="2024-01-16 03:36:09.156664610Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=6e22f13e-4bbe-42d4-90fe-755478117079 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jan 16 03:36:09 ingress-addon-legacy-194312 crio[894]: time="2024-01-16 03:36:09.156841670Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=6e22f13e-4bbe-42d4-90fe-755478117079 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jan 16 03:36:09 ingress-addon-legacy-194312 crio[894]: time="2024-01-16 03:36:09.157631904Z" level=info msg="Creating container: default/hello-world-app-5f5d8b66bb-lm5j2/hello-world-app" id=e17af6a7-d4b2-4dff-8328-c2ff4e11c9da name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Jan 16 03:36:09 ingress-addon-legacy-194312 crio[894]: time="2024-01-16 03:36:09.157716294Z" level=warning msg="Allowed annotations are specified for workload []"
	Jan 16 03:36:09 ingress-addon-legacy-194312 crio[894]: time="2024-01-16 03:36:09.231531583Z" level=info msg="Created container 50501897ec131f971068f0dd417fc24c3fd616413de0ec157618e962afc94fb4: default/hello-world-app-5f5d8b66bb-lm5j2/hello-world-app" id=e17af6a7-d4b2-4dff-8328-c2ff4e11c9da name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Jan 16 03:36:09 ingress-addon-legacy-194312 crio[894]: time="2024-01-16 03:36:09.232353440Z" level=info msg="Starting container: 50501897ec131f971068f0dd417fc24c3fd616413de0ec157618e962afc94fb4" id=41a605f4-3e38-4811-bde7-2c34d284dc16 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Jan 16 03:36:09 ingress-addon-legacy-194312 conmon[3684]: conmon 50501897ec131f971068 <ninfo>: container 3695 exited with status 1
	Jan 16 03:36:09 ingress-addon-legacy-194312 crio[894]: time="2024-01-16 03:36:09.246813007Z" level=info msg="Started container" PID=3695 containerID=50501897ec131f971068f0dd417fc24c3fd616413de0ec157618e962afc94fb4 description=default/hello-world-app-5f5d8b66bb-lm5j2/hello-world-app id=41a605f4-3e38-4811-bde7-2c34d284dc16 name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=41b0573ae19de88a9f7079a6ecfb0e7677224bb9c20c1adf87a33ed2a989c849
	Jan 16 03:36:09 ingress-addon-legacy-194312 crio[894]: time="2024-01-16 03:36:09.741834964Z" level=info msg="Removing container: 8390ae371a5a2a87d1b97c689bd4fbdf1f3d3943ec32d9126b5608af53674bd4" id=a7e28550-2758-416c-a8ce-f20167114c17 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Jan 16 03:36:09 ingress-addon-legacy-194312 crio[894]: time="2024-01-16 03:36:09.765794003Z" level=info msg="Removed container 8390ae371a5a2a87d1b97c689bd4fbdf1f3d3943ec32d9126b5608af53674bd4: default/hello-world-app-5f5d8b66bb-lm5j2/hello-world-app" id=a7e28550-2758-416c-a8ce-f20167114c17 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Jan 16 03:36:10 ingress-addon-legacy-194312 crio[894]: time="2024-01-16 03:36:10.641756531Z" level=warning msg="Stopping container 6e272d2a553c27c8f0cc83fc793445ca211596003f094f97386d2b3869c05086 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=9a8b3e09-4307-47b5-8ac2-9ee9c834c3d4 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jan 16 03:36:10 ingress-addon-legacy-194312 conmon[2711]: conmon 6e272d2a553c27c8f0cc <ninfo>: container 2723 exited with status 137
	Jan 16 03:36:10 ingress-addon-legacy-194312 crio[894]: time="2024-01-16 03:36:10.799762329Z" level=info msg="Stopped container 6e272d2a553c27c8f0cc83fc793445ca211596003f094f97386d2b3869c05086: ingress-nginx/ingress-nginx-controller-7fcf777cb7-4gkbs/controller" id=99cef16c-9ab8-453a-89e1-ce98921b257f name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jan 16 03:36:10 ingress-addon-legacy-194312 crio[894]: time="2024-01-16 03:36:10.800353046Z" level=info msg="Stopped container 6e272d2a553c27c8f0cc83fc793445ca211596003f094f97386d2b3869c05086: ingress-nginx/ingress-nginx-controller-7fcf777cb7-4gkbs/controller" id=9a8b3e09-4307-47b5-8ac2-9ee9c834c3d4 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jan 16 03:36:10 ingress-addon-legacy-194312 crio[894]: time="2024-01-16 03:36:10.800853850Z" level=info msg="Stopping pod sandbox: 7181e71d762f2b51bce058bd56654abfe83226c49e4da31408bc8a175d3ea652" id=1de83227-e8b3-48d4-9d91-8e365d4e3cf2 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 16 03:36:10 ingress-addon-legacy-194312 crio[894]: time="2024-01-16 03:36:10.803902046Z" level=info msg="Stopping pod sandbox: 7181e71d762f2b51bce058bd56654abfe83226c49e4da31408bc8a175d3ea652" id=ddb5ab63-9d48-47a9-9bcb-6f51df6ddf0e name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 16 03:36:10 ingress-addon-legacy-194312 crio[894]: time="2024-01-16 03:36:10.804108644Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-BYBHHJZRFMANMQAN - [0:0]\n:KUBE-HP-OS25EVIP4SQRZABC - [0:0]\n-X KUBE-HP-BYBHHJZRFMANMQAN\n-X KUBE-HP-OS25EVIP4SQRZABC\nCOMMIT\n"
	Jan 16 03:36:10 ingress-addon-legacy-194312 crio[894]: time="2024-01-16 03:36:10.812842029Z" level=info msg="Closing host port tcp:80"
	Jan 16 03:36:10 ingress-addon-legacy-194312 crio[894]: time="2024-01-16 03:36:10.812891983Z" level=info msg="Closing host port tcp:443"
	Jan 16 03:36:10 ingress-addon-legacy-194312 crio[894]: time="2024-01-16 03:36:10.814134831Z" level=info msg="Host port tcp:80 does not have an open socket"
	Jan 16 03:36:10 ingress-addon-legacy-194312 crio[894]: time="2024-01-16 03:36:10.814160916Z" level=info msg="Host port tcp:443 does not have an open socket"
	Jan 16 03:36:10 ingress-addon-legacy-194312 crio[894]: time="2024-01-16 03:36:10.814311300Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-4gkbs Namespace:ingress-nginx ID:7181e71d762f2b51bce058bd56654abfe83226c49e4da31408bc8a175d3ea652 UID:555071b1-de7c-4c4b-bf1a-e28f540a6973 NetNS:/var/run/netns/9c5aa5d0-52ad-4096-afff-2f2ceefe5ccc Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 16 03:36:10 ingress-addon-legacy-194312 crio[894]: time="2024-01-16 03:36:10.814454859Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-4gkbs from CNI network \"kindnet\" (type=ptp)"
	Jan 16 03:36:10 ingress-addon-legacy-194312 crio[894]: time="2024-01-16 03:36:10.849571539Z" level=info msg="Stopped pod sandbox: 7181e71d762f2b51bce058bd56654abfe83226c49e4da31408bc8a175d3ea652" id=1de83227-e8b3-48d4-9d91-8e365d4e3cf2 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 16 03:36:10 ingress-addon-legacy-194312 crio[894]: time="2024-01-16 03:36:10.849686593Z" level=info msg="Stopped pod sandbox (already stopped): 7181e71d762f2b51bce058bd56654abfe83226c49e4da31408bc8a175d3ea652" id=ddb5ab63-9d48-47a9-9bcb-6f51df6ddf0e name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
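	
	[editor's note] The iptables-restore payload logged at 03:36:10.804108644 above, unescaped for readability; it flushes and then removes (-X) the two per-pod hostport chains as host ports 80/443 are released by the stopping ingress controller:
	
	        *nat
	        :KUBE-HOSTPORTS - [0:0]
	        :KUBE-HP-BYBHHJZRFMANMQAN - [0:0]
	        :KUBE-HP-OS25EVIP4SQRZABC - [0:0]
	        -X KUBE-HP-BYBHHJZRFMANMQAN
	        -X KUBE-HP-OS25EVIP4SQRZABC
	        COMMIT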
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	50501897ec131       dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79                                                   7 seconds ago       Exited              hello-world-app           2                   41b0573ae19de       hello-world-app-5f5d8b66bb-lm5j2
	17b2a763d78a0       docker.io/library/nginx@sha256:7913e8fa2e6a5f0160a5e6b7ea48b7d4a301c6058d63c3d632a35a59093cb4eb                    2 minutes ago       Running             nginx                     0                   746f49fcef75f       nginx
	6e272d2a553c2       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   2 minutes ago       Exited              controller                0                   7181e71d762f2       ingress-nginx-controller-7fcf777cb7-4gkbs
	3f1a8632cb019       docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7     3 minutes ago       Exited              patch                     0                   9444c6ae16cb1       ingress-nginx-admission-patch-w849g
	ba000be8ddc6c       docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7     3 minutes ago       Exited              create                    0                   9dc0ccc92760c       ingress-nginx-admission-create-hpc45
	9e845077e291f       gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2    3 minutes ago       Running             storage-provisioner       0                   75d3e69f555e3       storage-provisioner
	3ab631e2fc512       6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184                                                   3 minutes ago       Running             coredns                   0                   c719a1ddddf64       coredns-66bff467f8-gnz7r
	d91d002d8bc8f       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                 3 minutes ago       Running             kindnet-cni               0                   bb1a83dabeb11       kindnet-xw5sg
	7ad0c4e9fa578       565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8                                                   3 minutes ago       Running             kube-proxy                0                   9a60276d7c56a       kube-proxy-m4ld2
	fb2c4a9d32b87       ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952                                                   3 minutes ago       Running             etcd                      0                   feae50978dbf4       etcd-ingress-addon-legacy-194312
	42c57cdd9b6ab       095f37015706de6eedb4f57eb2f9a25a1e3bf4bec63d50ba73f8968ef4094fd1                                                   3 minutes ago       Running             kube-scheduler            0                   1794d853df863       kube-scheduler-ingress-addon-legacy-194312
	ef525939f75f2       2694cf044d66591c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0                                                   3 minutes ago       Running             kube-apiserver            0                   f25934085f603       kube-apiserver-ingress-addon-legacy-194312
	3c01c27e4c61d       68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1                                                   3 minutes ago       Running             kube-controller-manager   0                   e91f203e4396c       kube-controller-manager-ingress-addon-legacy-194312
	
	
	==> coredns [3ab631e2fc512df92ab8352276231d325498d2ac9798a37bc5c53d3178b3c432] <==
	[INFO] 10.244.0.5:38197 - 42519 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000021604s
	[INFO] 10.244.0.5:38197 - 36476 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.008348111s
	[INFO] 10.244.0.5:57098 - 47740 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.010144104s
	[INFO] 10.244.0.5:38197 - 30525 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002052906s
	[INFO] 10.244.0.5:57098 - 51091 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001242619s
	[INFO] 10.244.0.5:38197 - 21644 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00011374s
	[INFO] 10.244.0.5:57098 - 60967 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00003977s
	[INFO] 10.244.0.5:35719 - 24379 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000078556s
	[INFO] 10.244.0.5:35719 - 46705 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000045818s
	[INFO] 10.244.0.5:41807 - 25710 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000031672s
	[INFO] 10.244.0.5:35719 - 55376 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000040853s
	[INFO] 10.244.0.5:41807 - 10083 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000036743s
	[INFO] 10.244.0.5:35719 - 60973 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000037563s
	[INFO] 10.244.0.5:41807 - 58501 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00002336s
	[INFO] 10.244.0.5:35719 - 11458 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000042757s
	[INFO] 10.244.0.5:41807 - 5338 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000040336s
	[INFO] 10.244.0.5:35719 - 3450 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000055574s
	[INFO] 10.244.0.5:41807 - 13890 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000042601s
	[INFO] 10.244.0.5:41807 - 54992 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000103591s
	[INFO] 10.244.0.5:41807 - 36786 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001773364s
	[INFO] 10.244.0.5:35719 - 14905 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.003071433s
	[INFO] 10.244.0.5:41807 - 54537 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001730827s
	[INFO] 10.244.0.5:41807 - 50842 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000051389s
	[INFO] 10.244.0.5:35719 - 65011 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001229392s
	[INFO] 10.244.0.5:35719 - 35988 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000057001s
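	
	[editor's note] The NXDOMAIN bursts above are expected behavior, not an error: the pod's resolv.conf search path (the ingress-nginx namespace suffix, svc.cluster.local, cluster.local, and the host's us-east-2.compute.internal domain) is tried in turn before the service name resolves NOERROR. A minimal Go sketch, assuming it runs in-cluster, of a lookup that skips that expansion by using the fully qualified name:
	
	package main
	
	import (
		"fmt"
		"net"
	)
	
	func main() {
		// The trailing dot marks the name absolute, so no search suffixes apply
		// and the resolver goes straight to the final query seen in the log.
		addrs, err := net.LookupHost("hello-world-app.default.svc.cluster.local.")
		fmt.Println(addrs, err)
	}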
	
	
	==> describe nodes <==
	Name:               ingress-addon-legacy-194312
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-194312
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd
	                    minikube.k8s.io/name=ingress-addon-legacy-194312
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_16T03_32_38_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 03:32:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-194312
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Jan 2024 03:36:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 03:36:11 +0000   Tue, 16 Jan 2024 03:32:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 03:36:11 +0000   Tue, 16 Jan 2024 03:32:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 03:36:11 +0000   Tue, 16 Jan 2024 03:32:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 03:36:11 +0000   Tue, 16 Jan 2024 03:33:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-194312
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 385f084e9f1945f29beeda508da87346
	  System UUID:                106b11bf-65a1-4116-9616-b41521c0bfb2
	  Boot ID:                    8bf0f894-1a91-4593-91c4-b833f91013d1
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-lm5j2                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m44s
	  kube-system                 coredns-66bff467f8-gnz7r                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     3m24s
	  kube-system                 etcd-ingress-addon-legacy-194312                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 kindnet-xw5sg                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3m24s
	  kube-system                 kube-apiserver-ingress-addon-legacy-194312             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-194312    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 kube-proxy-m4ld2                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m24s
	  kube-system                 kube-scheduler-ingress-addon-legacy-194312             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             120Mi (1%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  3m50s (x5 over 3m50s)  kubelet     Node ingress-addon-legacy-194312 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m50s (x5 over 3m50s)  kubelet     Node ingress-addon-legacy-194312 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m50s (x4 over 3m50s)  kubelet     Node ingress-addon-legacy-194312 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m36s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m35s                  kubelet     Node ingress-addon-legacy-194312 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m35s                  kubelet     Node ingress-addon-legacy-194312 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m35s                  kubelet     Node ingress-addon-legacy-194312 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m22s                  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m15s                  kubelet     Node ingress-addon-legacy-194312 status is now: NodeReady
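	
	[editor's note] The percentages under "Allocated resources" (and in the pods table, repaired above from the raw report's mangled "%!)(MISSING)" printf output) are integer ratios of requests/limits to the allocatable capacity shown above: 2 CPUs (2000m) and 8022500Ki memory. A quick Go arithmetic check, using only values from this node:
	
	package main
	
	import "fmt"
	
	func main() {
		// Allocatable capacity, from the node status above.
		const cpuMilli = 2000 // 2 CPUs = 2000m
		const memKi = 8022500 // memory in Ki
	
		fmt.Println(750*100/cpuMilli)   // cpu requests 750m     -> 37 (%)
		fmt.Println(100*100/cpuMilli)   // cpu limits 100m       -> 5 (%)
		fmt.Println(120*1024*100/memKi) // memory requests 120Mi -> 1 (%)
		fmt.Println(220*1024*100/memKi) // memory limits 220Mi   -> 2 (%)
	}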
	
	
	==> dmesg <==
	[  +0.001097] FS-Cache: O-key=[8] 'c570ed0000000000'
	[  +0.000749] FS-Cache: N-cookie c=00000030 [p=00000027 fl=2 nc=0 na=1]
	[  +0.001056] FS-Cache: N-cookie d=0000000029d6e22a{9p.inode} n=00000000c6cf4d72
	[  +0.001100] FS-Cache: N-key=[8] 'c570ed0000000000'
	[  +0.004636] FS-Cache: Duplicate cookie detected
	[  +0.000709] FS-Cache: O-cookie c=0000002a [p=00000027 fl=226 nc=0 na=1]
	[  +0.001231] FS-Cache: O-cookie d=0000000029d6e22a{9p.inode} n=000000008f0122ea
	[  +0.001197] FS-Cache: O-key=[8] 'c570ed0000000000'
	[  +0.000769] FS-Cache: N-cookie c=00000031 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000976] FS-Cache: N-cookie d=0000000029d6e22a{9p.inode} n=000000004bbac6e1
	[  +0.001161] FS-Cache: N-key=[8] 'c570ed0000000000'
	[  +2.789451] FS-Cache: Duplicate cookie detected
	[  +0.000734] FS-Cache: O-cookie c=00000029 [p=00000027 fl=226 nc=0 na=1]
	[  +0.000968] FS-Cache: O-cookie d=0000000029d6e22a{9p.inode} n=00000000b0405fe5
	[  +0.001144] FS-Cache: O-key=[8] 'c470ed0000000000'
	[  +0.000705] FS-Cache: N-cookie c=00000033 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000974] FS-Cache: N-cookie d=0000000029d6e22a{9p.inode} n=00000000c6cf4d72
	[  +0.001064] FS-Cache: N-key=[8] 'c470ed0000000000'
	[  +0.345600] FS-Cache: Duplicate cookie detected
	[  +0.000705] FS-Cache: O-cookie c=0000002d [p=00000027 fl=226 nc=0 na=1]
	[  +0.001146] FS-Cache: O-cookie d=0000000029d6e22a{9p.inode} n=00000000fd1f03d1
	[  +0.001221] FS-Cache: O-key=[8] 'ca70ed0000000000'
	[  +0.000720] FS-Cache: N-cookie c=00000034 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000979] FS-Cache: N-cookie d=0000000029d6e22a{9p.inode} n=00000000057ce508
	[  +0.001118] FS-Cache: N-key=[8] 'ca70ed0000000000'
	
	
	==> etcd [fb2c4a9d32b8720e6e3cf96a6825b2147b3549cc5fb62fe6fc12e9231d433874] <==
	raft2024/01/16 03:32:30 INFO: aec36adc501070cc became follower at term 0
	raft2024/01/16 03:32:30 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2024/01/16 03:32:30 INFO: aec36adc501070cc became follower at term 1
	raft2024/01/16 03:32:30 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2024-01-16 03:32:30.165351 W | auth: simple token is not cryptographically signed
	2024-01-16 03:32:30.169058 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2024-01-16 03:32:30.169519 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2024/01/16 03:32:30 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2024-01-16 03:32:30.170109 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2024-01-16 03:32:30.172824 I | embed: listening for peers on 192.168.49.2:2380
	2024-01-16 03:32:30.172912 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-01-16 03:32:30.173161 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2024/01/16 03:32:30 INFO: aec36adc501070cc is starting a new election at term 1
	raft2024/01/16 03:32:30 INFO: aec36adc501070cc became candidate at term 2
	raft2024/01/16 03:32:30 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2024/01/16 03:32:30 INFO: aec36adc501070cc became leader at term 2
	raft2024/01/16 03:32:30 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2024-01-16 03:32:30.660129 I | etcdserver: published {Name:ingress-addon-legacy-194312 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2024-01-16 03:32:30.660300 I | embed: ready to serve client requests
	2024-01-16 03:32:30.662203 I | embed: serving client requests on 127.0.0.1:2379
	2024-01-16 03:32:30.665174 I | etcdserver: setting up the initial cluster version to 3.4
	2024-01-16 03:32:30.675996 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-01-16 03:32:30.676135 I | etcdserver/api: enabled capabilities for version 3.4
	2024-01-16 03:32:30.676181 I | embed: ready to serve client requests
	2024-01-16 03:32:30.677487 I | embed: serving client requests on 192.168.49.2:2379
	
	
	==> kernel <==
	 03:36:16 up  3:18,  0 users,  load average: 0.56, 1.04, 1.66
	Linux ingress-addon-legacy-194312 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [d91d002d8bc8fb77412091ba2dda3e3d027246ff1a4e17b6589b747b5d4e646d] <==
	I0116 03:34:16.188663       1 main.go:227] handling current node
	I0116 03:34:26.191646       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 03:34:26.191672       1 main.go:227] handling current node
	I0116 03:34:36.194958       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 03:34:36.195079       1 main.go:227] handling current node
	I0116 03:34:46.199099       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 03:34:46.199207       1 main.go:227] handling current node
	I0116 03:34:56.203350       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 03:34:56.203387       1 main.go:227] handling current node
	I0116 03:35:06.207245       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 03:35:06.207280       1 main.go:227] handling current node
	I0116 03:35:16.219405       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 03:35:16.219433       1 main.go:227] handling current node
	I0116 03:35:26.222986       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 03:35:26.223013       1 main.go:227] handling current node
	I0116 03:35:36.227735       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 03:35:36.227765       1 main.go:227] handling current node
	I0116 03:35:46.231196       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 03:35:46.231225       1 main.go:227] handling current node
	I0116 03:35:56.234948       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 03:35:56.234974       1 main.go:227] handling current node
	I0116 03:36:06.242951       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 03:36:06.242977       1 main.go:227] handling current node
	I0116 03:36:16.246750       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 03:36:16.246851       1 main.go:227] handling current node
	
	
	==> kube-apiserver [ef525939f75f20dd5f61f83a6ce44ca3f0eba95ac92039e99009ca462cf3d801] <==
	I0116 03:32:34.720173       1 naming_controller.go:291] Starting NamingConditionController
	I0116 03:32:34.720192       1 establishing_controller.go:76] Starting EstablishingController
	I0116 03:32:34.720213       1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
	I0116 03:32:34.720238       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0116 03:32:34.730213       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0116 03:32:34.730318       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0116 03:32:34.730371       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0116 03:32:35.506816       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0116 03:32:35.506848       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0116 03:32:35.520271       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0116 03:32:35.523570       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0116 03:32:35.523586       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0116 03:32:35.874662       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0116 03:32:35.907377       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0116 03:32:36.051432       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0116 03:32:36.052985       1 controller.go:609] quota admission added evaluator for: endpoints
	I0116 03:32:36.059329       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0116 03:32:36.935523       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0116 03:32:37.693631       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0116 03:32:37.746477       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0116 03:32:41.020965       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0116 03:32:52.718724       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0116 03:32:52.760988       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0116 03:33:11.949976       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0116 03:33:32.797535       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
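	
	[editor's note] The healthz check recorded in the start log at 03:33:10.239583 (HTTP 200, body "ok") can be reproduced against this apiserver endpoint. A minimal Go sketch; InsecureSkipVerify here is an assumption standing in for minikube's actual CA-bundle handling:
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)
	
	func main() {
		// Skip certificate verification only for this illustrative probe;
		// the real client trusts /home/jenkins/.../.minikube/ca.crt instead.
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
	}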
	
	
	==> kube-controller-manager [3c01c27e4c61d448899f9f7ca37c58d426102c204ac1e426917ebc5edc9ccdbb] <==
	I0116 03:32:52.781947       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"4b1ad72b-5979-471c-bff2-0e99fcb1c67a", APIVersion:"apps/v1", ResourceVersion:"320", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-gnz7r
	I0116 03:32:52.794435       1 shared_informer.go:230] Caches are synced for HPA 
	I0116 03:32:52.803497       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"327dfc7a-1c92-4404-929f-bdb566a801c3", APIVersion:"apps/v1", ResourceVersion:"212", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-m4ld2
	I0116 03:32:52.803529       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"49aa1865-d613-40d8-ad6e-8bcbabcdf1ab", APIVersion:"apps/v1", ResourceVersion:"230", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-xw5sg
	I0116 03:32:52.803540       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"4b1ad72b-5979-471c-bff2-0e99fcb1c67a", APIVersion:"apps/v1", ResourceVersion:"320", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-nrr8q
	I0116 03:32:52.995903       1 shared_informer.go:230] Caches are synced for attach detach 
	I0116 03:32:53.089908       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"88cec535-f9a7-4ff3-8d02-7fc2c15cbdb2", APIVersion:"apps/v1", ResourceVersion:"362", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I0116 03:32:53.140296       1 shared_informer.go:230] Caches are synced for disruption 
	I0116 03:32:53.141415       1 disruption.go:339] Sending events to api server.
	I0116 03:32:53.237254       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"4b1ad72b-5979-471c-bff2-0e99fcb1c67a", APIVersion:"apps/v1", ResourceVersion:"363", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-nrr8q
	I0116 03:32:53.254433       1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator 
	I0116 03:32:53.320178       1 shared_informer.go:230] Caches are synced for resource quota 
	I0116 03:32:53.334953       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0116 03:32:53.335058       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0116 03:32:53.346069       1 shared_informer.go:230] Caches are synced for resource quota 
	I0116 03:32:53.351143       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0116 03:33:02.696692       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0116 03:33:11.932147       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"d03a2c73-f5ac-4489-bf14-96b4d3ae1d86", APIVersion:"apps/v1", ResourceVersion:"477", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0116 03:33:11.947214       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"9c342301-74e5-4a82-9664-c518fd5f64b8", APIVersion:"apps/v1", ResourceVersion:"478", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-4gkbs
	I0116 03:33:12.002774       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"f0236d92-3926-425a-94c3-3d10889cf594", APIVersion:"batch/v1", ResourceVersion:"485", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-hpc45
	I0116 03:33:12.045361       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"d6d0444d-df0e-43be-a440-31b8605b7adb", APIVersion:"batch/v1", ResourceVersion:"498", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-w849g
	I0116 03:33:14.335684       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"d6d0444d-df0e-43be-a440-31b8605b7adb", APIVersion:"batch/v1", ResourceVersion:"505", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0116 03:33:14.356151       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"f0236d92-3926-425a-94c3-3d10889cf594", APIVersion:"batch/v1", ResourceVersion:"500", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0116 03:35:51.199932       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"131bbfd8-b7c9-4e9c-8f0a-359ddb78556d", APIVersion:"apps/v1", ResourceVersion:"714", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0116 03:35:51.228571       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"298664a7-9cef-4eb8-b813-d9ff300df2c7", APIVersion:"apps/v1", ResourceVersion:"715", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-lm5j2
	
	
	==> kube-proxy [7ad0c4e9fa5784407316bd2e4e863aa7ab63af4db5b528aa9050e4ae5b5e7323] <==
	W0116 03:32:54.250506       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0116 03:32:54.261917       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I0116 03:32:54.262062       1 server_others.go:186] Using iptables Proxier.
	I0116 03:32:54.262442       1 server.go:583] Version: v1.18.20
	I0116 03:32:54.265408       1 config.go:315] Starting service config controller
	I0116 03:32:54.265454       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0116 03:32:54.269624       1 config.go:133] Starting endpoints config controller
	I0116 03:32:54.269653       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0116 03:32:54.369812       1 shared_informer.go:230] Caches are synced for endpoints config 
	I0116 03:32:54.369815       1 shared_informer.go:230] Caches are synced for service config 
	
	
	==> kube-scheduler [42c57cdd9b6ab03d04e588e77ecda3b497531aa5d845c759db0a34d2134563b2] <==
	W0116 03:32:34.645239       1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0116 03:32:34.645272       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0116 03:32:34.645884       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0116 03:32:34.696076       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0116 03:32:34.696098       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0116 03:32:34.699180       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0116 03:32:34.705834       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0116 03:32:34.712245       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0116 03:32:34.713366       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0116 03:32:34.713597       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0116 03:32:34.723777       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0116 03:32:34.743392       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0116 03:32:34.743580       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0116 03:32:34.743705       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0116 03:32:34.743832       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0116 03:32:34.743967       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0116 03:32:34.744095       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0116 03:32:34.744210       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0116 03:32:34.744329       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0116 03:32:34.744443       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0116 03:32:34.744554       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0116 03:32:35.629226       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0116 03:32:35.697999       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0116 03:32:36.013579       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0116 03:32:52.831264       1 factory.go:503] pod: kube-system/coredns-66bff467f8-gnz7r is already present in the active queue
	
	
	==> kubelet <==
	Jan 16 03:35:55 ingress-addon-legacy-194312 kubelet[1621]: I0116 03:35:55.719840    1621 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 4c5fcfa56a7d7b1a30bc6aa428d3b4f0c10267301141c8beb697a9e3f5207fd6
	Jan 16 03:35:55 ingress-addon-legacy-194312 kubelet[1621]: I0116 03:35:55.720214    1621 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 8390ae371a5a2a87d1b97c689bd4fbdf1f3d3943ec32d9126b5608af53674bd4
	Jan 16 03:35:55 ingress-addon-legacy-194312 kubelet[1621]: E0116 03:35:55.720486    1621 pod_workers.go:191] Error syncing pod 6ae38957-f7f4-4d15-99ab-169bb3724454 ("hello-world-app-5f5d8b66bb-lm5j2_default(6ae38957-f7f4-4d15-99ab-169bb3724454)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-lm5j2_default(6ae38957-f7f4-4d15-99ab-169bb3724454)"
	Jan 16 03:35:56 ingress-addon-legacy-194312 kubelet[1621]: I0116 03:35:56.722193    1621 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 8390ae371a5a2a87d1b97c689bd4fbdf1f3d3943ec32d9126b5608af53674bd4
	Jan 16 03:35:56 ingress-addon-legacy-194312 kubelet[1621]: E0116 03:35:56.722423    1621 pod_workers.go:191] Error syncing pod 6ae38957-f7f4-4d15-99ab-169bb3724454 ("hello-world-app-5f5d8b66bb-lm5j2_default(6ae38957-f7f4-4d15-99ab-169bb3724454)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-lm5j2_default(6ae38957-f7f4-4d15-99ab-169bb3724454)"
	Jan 16 03:35:57 ingress-addon-legacy-194312 kubelet[1621]: E0116 03:35:57.155605    1621 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jan 16 03:35:57 ingress-addon-legacy-194312 kubelet[1621]: E0116 03:35:57.155639    1621 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jan 16 03:35:57 ingress-addon-legacy-194312 kubelet[1621]: E0116 03:35:57.155679    1621 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jan 16 03:35:57 ingress-addon-legacy-194312 kubelet[1621]: E0116 03:35:57.155708    1621 pod_workers.go:191] Error syncing pod 8ac8ef92-ef3c-43f9-9dc8-c3d344aa483c ("kube-ingress-dns-minikube_kube-system(8ac8ef92-ef3c-43f9-9dc8-c3d344aa483c)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Jan 16 03:36:07 ingress-addon-legacy-194312 kubelet[1621]: I0116 03:36:07.132592    1621 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-zc48j" (UniqueName: "kubernetes.io/secret/8ac8ef92-ef3c-43f9-9dc8-c3d344aa483c-minikube-ingress-dns-token-zc48j") pod "8ac8ef92-ef3c-43f9-9dc8-c3d344aa483c" (UID: "8ac8ef92-ef3c-43f9-9dc8-c3d344aa483c")
	Jan 16 03:36:07 ingress-addon-legacy-194312 kubelet[1621]: I0116 03:36:07.136625    1621 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ac8ef92-ef3c-43f9-9dc8-c3d344aa483c-minikube-ingress-dns-token-zc48j" (OuterVolumeSpecName: "minikube-ingress-dns-token-zc48j") pod "8ac8ef92-ef3c-43f9-9dc8-c3d344aa483c" (UID: "8ac8ef92-ef3c-43f9-9dc8-c3d344aa483c"). InnerVolumeSpecName "minikube-ingress-dns-token-zc48j". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 16 03:36:07 ingress-addon-legacy-194312 kubelet[1621]: I0116 03:36:07.232903    1621 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-zc48j" (UniqueName: "kubernetes.io/secret/8ac8ef92-ef3c-43f9-9dc8-c3d344aa483c-minikube-ingress-dns-token-zc48j") on node "ingress-addon-legacy-194312" DevicePath ""
	Jan 16 03:36:08 ingress-addon-legacy-194312 kubelet[1621]: E0116 03:36:08.631399    1621 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-4gkbs.17aab6970340147a", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-4gkbs", UID:"555071b1-de7c-4c4b-bf1a-e28f540a6973", APIVersion:"v1", ResourceVersion:"483", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-194312"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1619c4a2531847a, ext:211004471527, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1619c4a2531847a, ext:211004471527, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-4gkbs.17aab6970340147a" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jan 16 03:36:08 ingress-addon-legacy-194312 kubelet[1621]: E0116 03:36:08.644327    1621 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-4gkbs.17aab6970340147a", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-4gkbs", UID:"555071b1-de7c-4c4b-bf1a-e28f540a6973", APIVersion:"v1", ResourceVersion:"483", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-194312"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1619c4a2531847a, ext:211004471527, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1619c4a25b773fd, ext:211013249130, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-4gkbs.17aab6970340147a" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jan 16 03:36:09 ingress-addon-legacy-194312 kubelet[1621]: I0116 03:36:09.154798    1621 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 8390ae371a5a2a87d1b97c689bd4fbdf1f3d3943ec32d9126b5608af53674bd4
	Jan 16 03:36:09 ingress-addon-legacy-194312 kubelet[1621]: I0116 03:36:09.740119    1621 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 8390ae371a5a2a87d1b97c689bd4fbdf1f3d3943ec32d9126b5608af53674bd4
	Jan 16 03:36:09 ingress-addon-legacy-194312 kubelet[1621]: I0116 03:36:09.740349    1621 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 50501897ec131f971068f0dd417fc24c3fd616413de0ec157618e962afc94fb4
	Jan 16 03:36:09 ingress-addon-legacy-194312 kubelet[1621]: E0116 03:36:09.740582    1621 pod_workers.go:191] Error syncing pod 6ae38957-f7f4-4d15-99ab-169bb3724454 ("hello-world-app-5f5d8b66bb-lm5j2_default(6ae38957-f7f4-4d15-99ab-169bb3724454)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-lm5j2_default(6ae38957-f7f4-4d15-99ab-169bb3724454)"
	Jan 16 03:36:11 ingress-addon-legacy-194312 kubelet[1621]: I0116 03:36:11.140950    1621 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/555071b1-de7c-4c4b-bf1a-e28f540a6973-webhook-cert") pod "555071b1-de7c-4c4b-bf1a-e28f540a6973" (UID: "555071b1-de7c-4c4b-bf1a-e28f540a6973")
	Jan 16 03:36:11 ingress-addon-legacy-194312 kubelet[1621]: I0116 03:36:11.141005    1621 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-b76lg" (UniqueName: "kubernetes.io/secret/555071b1-de7c-4c4b-bf1a-e28f540a6973-ingress-nginx-token-b76lg") pod "555071b1-de7c-4c4b-bf1a-e28f540a6973" (UID: "555071b1-de7c-4c4b-bf1a-e28f540a6973")
	Jan 16 03:36:11 ingress-addon-legacy-194312 kubelet[1621]: I0116 03:36:11.148329    1621 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/555071b1-de7c-4c4b-bf1a-e28f540a6973-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "555071b1-de7c-4c4b-bf1a-e28f540a6973" (UID: "555071b1-de7c-4c4b-bf1a-e28f540a6973"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 16 03:36:11 ingress-addon-legacy-194312 kubelet[1621]: I0116 03:36:11.155340    1621 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/555071b1-de7c-4c4b-bf1a-e28f540a6973-ingress-nginx-token-b76lg" (OuterVolumeSpecName: "ingress-nginx-token-b76lg") pod "555071b1-de7c-4c4b-bf1a-e28f540a6973" (UID: "555071b1-de7c-4c4b-bf1a-e28f540a6973"). InnerVolumeSpecName "ingress-nginx-token-b76lg". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 16 03:36:11 ingress-addon-legacy-194312 kubelet[1621]: I0116 03:36:11.241269    1621 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/555071b1-de7c-4c4b-bf1a-e28f540a6973-webhook-cert") on node "ingress-addon-legacy-194312" DevicePath ""
	Jan 16 03:36:11 ingress-addon-legacy-194312 kubelet[1621]: I0116 03:36:11.241316    1621 reconciler.go:319] Volume detached for volume "ingress-nginx-token-b76lg" (UniqueName: "kubernetes.io/secret/555071b1-de7c-4c4b-bf1a-e28f540a6973-ingress-nginx-token-b76lg") on node "ingress-addon-legacy-194312" DevicePath ""
	Jan 16 03:36:11 ingress-addon-legacy-194312 kubelet[1621]: W0116 03:36:11.744823    1621 pod_container_deletor.go:77] Container "7181e71d762f2b51bce058bd56654abfe83226c49e4da31408bc8a175d3ea652" not found in pod's containers
	
	
	==> storage-provisioner [9e845077e291f4b041feb434bb3e7b73c88b4d678166867bf5be10b984dfeeaf] <==
	I0116 03:33:08.271834       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0116 03:33:08.288811       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0116 03:33:08.288908       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0116 03:33:08.295070       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0116 03:33:08.295621       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-194312_7bd752ac-f505-45df-815a-47124d4f57ce!
	I0116 03:33:08.295765       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5006ac4d-bb67-4aa7-ad1e-ac8dcb875954", APIVersion:"v1", ResourceVersion:"434", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-194312_7bd752ac-f505-45df-815a-47124d4f57ce became leader
	I0116 03:33:08.396638       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-194312_7bd752ac-f505-45df-815a-47124d4f57ce!
	

                                                
                                                
-- /stdout --
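The repeated ImageInspectError in the kubelet log above is a short-name resolution failure: CRI-O refuses unqualified image references such as cryptexlabs/minikube-ingress-dns when no unqualified-search registries are configured, which is exactly what the error text reports. A minimal sketch of the two usual remedies, assuming shell access to the node via this run's profile (the registries.conf path is taken from the error itself, and the append assumes the file has no conflicting [[registry]] tables):
	# Option 1: let short names fall back to docker.io inside the node
	out/minikube-linux-arm64 -p ingress-addon-legacy-194312 ssh "echo 'unqualified-search-registries = [\"docker.io\"]' | sudo tee -a /etc/containers/registries.conf"
	# Option 2: reference the image fully qualified so no search list is needed:
	#   docker.io/cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab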
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-194312 -n ingress-addon-legacy-194312
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-194312 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (174.32s)
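For context, the kube-scheduler "forbidden" list errors earlier in the log are expected start-up noise rather than part of this failure: they occur in the window before the scheduler's RBAC-backed informer caches sync (see the later "Caches are synced" line). If in doubt, the default kubeadm binding can be checked directly; a hedged one-liner against this run's context:
	kubectl --context ingress-addon-legacy-194312 get clusterrolebinding system:kube-scheduler -o wide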

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (4.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-741097 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-741097 -- exec busybox-5bc68d56bd-5xhls -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
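The pipeline in the command above leans on busybox's nslookup output format, where line 5 typically reads "Address 1: <ip> <name>" for the queried host; `awk 'NR==5'` keeps that line and `cut -d' ' -f3` isolates the IP that the subsequent ping targets. A sketch of what it extracts, under that assumption about the output shape:
	# inside the busybox pod; typical busybox nslookup output, line 5:
	#   Address 1: 192.168.58.1 host.minikube.internal
	nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3    # -> 192.168.58.1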
multinode_test.go:599: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-741097 -- exec busybox-5bc68d56bd-5xhls -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-741097 -- exec busybox-5bc68d56bd-5xhls -- sh -c "ping -c 1 192.168.58.1": exit status 1 (220.635695ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-5xhls): exit status 1
multinode_test.go:588: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-741097 -- exec busybox-5bc68d56bd-zwvv5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-741097 -- exec busybox-5bc68d56bd-zwvv5 -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-741097 -- exec busybox-5bc68d56bd-zwvv5 -- sh -c "ping -c 1 192.168.58.1": exit status 1 (221.251364ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-zwvv5): exit status 1
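"ping: permission denied (are you root?)" is the classic symptom of busybox ping having no way to open an ICMP socket, here most likely because the runtime's default capability set omits NET_RAW (CRI-O dropped it from its defaults) and the container is neither privileged nor running as root. A sketch of two common workarounds, assuming the busybox deployment from this test can be patched (the JSON-patch path is illustrative):
	# restore the raw-socket capability on the busybox container
	out/minikube-linux-arm64 kubectl -p multinode-741097 -- patch deployment busybox --type=json \
	  -p='[{"op":"add","path":"/spec/template/spec/containers/0/securityContext","value":{"capabilities":{"add":["NET_RAW"]}}}]'
	# alternatively, allow unprivileged ICMP datagram sockets by setting the
	# kubelet-safe sysctl net.ipv4.ping_group_range in the pod's securityContext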
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-741097
helpers_test.go:235: (dbg) docker inspect multinode-741097:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "888bdee9b3912af7f1daa0ba33eed024d886f6d4bcbb0e38263793114ce465e5",
	        "Created": "2024-01-16T03:42:06.237199188Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 788306,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-16T03:42:06.559796413Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20e2d9b56eb2e595fd2b9c5719a0e58f3d7f8c692190d8fde2558cb6a9714f01",
	        "ResolvConfPath": "/var/lib/docker/containers/888bdee9b3912af7f1daa0ba33eed024d886f6d4bcbb0e38263793114ce465e5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/888bdee9b3912af7f1daa0ba33eed024d886f6d4bcbb0e38263793114ce465e5/hostname",
	        "HostsPath": "/var/lib/docker/containers/888bdee9b3912af7f1daa0ba33eed024d886f6d4bcbb0e38263793114ce465e5/hosts",
	        "LogPath": "/var/lib/docker/containers/888bdee9b3912af7f1daa0ba33eed024d886f6d4bcbb0e38263793114ce465e5/888bdee9b3912af7f1daa0ba33eed024d886f6d4bcbb0e38263793114ce465e5-json.log",
	        "Name": "/multinode-741097",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-741097:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-741097",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e42ec314c9ed882405f2c2a4d8cf0134e378888c8e5138fc404732ff2cb37c9a-init/diff:/var/lib/docker/overlay2/a206f4642a9a6aaf26e75b007cd03505dc1586f0041014295f47d8b249463698/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e42ec314c9ed882405f2c2a4d8cf0134e378888c8e5138fc404732ff2cb37c9a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e42ec314c9ed882405f2c2a4d8cf0134e378888c8e5138fc404732ff2cb37c9a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e42ec314c9ed882405f2c2a4d8cf0134e378888c8e5138fc404732ff2cb37c9a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-741097",
	                "Source": "/var/lib/docker/volumes/multinode-741097/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-741097",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-741097",
	                "name.minikube.sigs.k8s.io": "multinode-741097",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "236e743354e0a240b992e8d646dd24a1ab63acac86538f26d7c78762c390eb9d",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33557"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33556"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33553"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33555"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33554"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/236e743354e0",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-741097": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "888bdee9b391",
	                        "multinode-741097"
	                    ],
	                    "NetworkID": "27fac52d2020dcc1f866860a36acfbfc6b251469d4e69e2ab95ee3c6796ca9ba",
	                    "EndpointID": "314da09b1c0f98487afb760d6c1bb2217be498221d32911349c1dbf7af83710c",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
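In the inspect output above, the empty HostPort values under HostConfig.PortBindings mean Docker was asked to pick ephemeral host ports; the resolved values appear under NetworkSettings.Ports. The same mapping can be read without parsing the whole blob:
	docker port multinode-741097 22/tcp    # prints 127.0.0.1:33557, matching NetworkSettings.Ports above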
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p multinode-741097 -n multinode-741097
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p multinode-741097 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p multinode-741097 logs -n 25: (1.541674411s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-010273                           | mount-start-2-010273 | jenkins | v1.32.0 | 16 Jan 24 03:41 UTC | 16 Jan 24 03:41 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-010273 ssh -- ls                    | mount-start-2-010273 | jenkins | v1.32.0 | 16 Jan 24 03:41 UTC | 16 Jan 24 03:41 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-008484                           | mount-start-1-008484 | jenkins | v1.32.0 | 16 Jan 24 03:41 UTC | 16 Jan 24 03:41 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-010273 ssh -- ls                    | mount-start-2-010273 | jenkins | v1.32.0 | 16 Jan 24 03:41 UTC | 16 Jan 24 03:41 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-010273                           | mount-start-2-010273 | jenkins | v1.32.0 | 16 Jan 24 03:41 UTC | 16 Jan 24 03:41 UTC |
	| start   | -p mount-start-2-010273                           | mount-start-2-010273 | jenkins | v1.32.0 | 16 Jan 24 03:41 UTC | 16 Jan 24 03:41 UTC |
	| ssh     | mount-start-2-010273 ssh -- ls                    | mount-start-2-010273 | jenkins | v1.32.0 | 16 Jan 24 03:41 UTC | 16 Jan 24 03:41 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-010273                           | mount-start-2-010273 | jenkins | v1.32.0 | 16 Jan 24 03:41 UTC | 16 Jan 24 03:42 UTC |
	| delete  | -p mount-start-1-008484                           | mount-start-1-008484 | jenkins | v1.32.0 | 16 Jan 24 03:42 UTC | 16 Jan 24 03:42 UTC |
	| start   | -p multinode-741097                               | multinode-741097     | jenkins | v1.32.0 | 16 Jan 24 03:42 UTC | 16 Jan 24 03:44 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-741097 -- apply -f                   | multinode-741097     | jenkins | v1.32.0 | 16 Jan 24 03:44 UTC | 16 Jan 24 03:44 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-741097 -- rollout                    | multinode-741097     | jenkins | v1.32.0 | 16 Jan 24 03:44 UTC | 16 Jan 24 03:44 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-741097 -- get pods -o                | multinode-741097     | jenkins | v1.32.0 | 16 Jan 24 03:44 UTC | 16 Jan 24 03:44 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-741097 -- get pods -o                | multinode-741097     | jenkins | v1.32.0 | 16 Jan 24 03:44 UTC | 16 Jan 24 03:44 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-741097 -- exec                       | multinode-741097     | jenkins | v1.32.0 | 16 Jan 24 03:44 UTC | 16 Jan 24 03:44 UTC |
	|         | busybox-5bc68d56bd-5xhls --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-741097 -- exec                       | multinode-741097     | jenkins | v1.32.0 | 16 Jan 24 03:44 UTC | 16 Jan 24 03:44 UTC |
	|         | busybox-5bc68d56bd-zwvv5 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-741097 -- exec                       | multinode-741097     | jenkins | v1.32.0 | 16 Jan 24 03:44 UTC | 16 Jan 24 03:44 UTC |
	|         | busybox-5bc68d56bd-5xhls --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-741097 -- exec                       | multinode-741097     | jenkins | v1.32.0 | 16 Jan 24 03:44 UTC | 16 Jan 24 03:44 UTC |
	|         | busybox-5bc68d56bd-zwvv5 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-741097 -- exec                       | multinode-741097     | jenkins | v1.32.0 | 16 Jan 24 03:44 UTC | 16 Jan 24 03:44 UTC |
	|         | busybox-5bc68d56bd-5xhls -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-741097 -- exec                       | multinode-741097     | jenkins | v1.32.0 | 16 Jan 24 03:44 UTC | 16 Jan 24 03:44 UTC |
	|         | busybox-5bc68d56bd-zwvv5 -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-741097 -- get pods -o                | multinode-741097     | jenkins | v1.32.0 | 16 Jan 24 03:44 UTC | 16 Jan 24 03:44 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-741097 -- exec                       | multinode-741097     | jenkins | v1.32.0 | 16 Jan 24 03:44 UTC | 16 Jan 24 03:44 UTC |
	|         | busybox-5bc68d56bd-5xhls                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-741097 -- exec                       | multinode-741097     | jenkins | v1.32.0 | 16 Jan 24 03:44 UTC |                     |
	|         | busybox-5bc68d56bd-5xhls -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-741097 -- exec                       | multinode-741097     | jenkins | v1.32.0 | 16 Jan 24 03:44 UTC | 16 Jan 24 03:44 UTC |
	|         | busybox-5bc68d56bd-zwvv5                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-741097 -- exec                       | multinode-741097     | jenkins | v1.32.0 | 16 Jan 24 03:44 UTC |                     |
	|         | busybox-5bc68d56bd-zwvv5 -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 03:42:00
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 03:42:00.868606  787862 out.go:296] Setting OutFile to fd 1 ...
	I0116 03:42:00.868745  787862 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:42:00.868755  787862 out.go:309] Setting ErrFile to fd 2...
	I0116 03:42:00.868761  787862 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:42:00.869017  787862 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-719286/.minikube/bin
	I0116 03:42:00.869421  787862 out.go:303] Setting JSON to false
	I0116 03:42:00.870264  787862 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12270,"bootTime":1705364251,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0116 03:42:00.870337  787862 start.go:138] virtualization:  
	I0116 03:42:00.873014  787862 out.go:177] * [multinode-741097] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0116 03:42:00.875245  787862 out.go:177]   - MINIKUBE_LOCATION=17967
	I0116 03:42:00.877301  787862 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 03:42:00.875385  787862 notify.go:220] Checking for updates...
	I0116 03:42:00.879485  787862 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17967-719286/kubeconfig
	I0116 03:42:00.881572  787862 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-719286/.minikube
	I0116 03:42:00.883674  787862 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0116 03:42:00.885660  787862 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 03:42:00.888034  787862 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 03:42:00.915522  787862 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0116 03:42:00.915647  787862 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 03:42:00.999080  787862 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:35 SystemTime:2024-01-16 03:42:00.990195101 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0116 03:42:00.999178  787862 docker.go:295] overlay module found
	I0116 03:42:01.002273  787862 out.go:177] * Using the docker driver based on user configuration
	I0116 03:42:01.003924  787862 start.go:298] selected driver: docker
	I0116 03:42:01.003941  787862 start.go:902] validating driver "docker" against <nil>
	I0116 03:42:01.003955  787862 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 03:42:01.004681  787862 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 03:42:01.068584  787862 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:35 SystemTime:2024-01-16 03:42:01.060052803 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0116 03:42:01.068739  787862 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0116 03:42:01.068985  787862 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0116 03:42:01.070686  787862 out.go:177] * Using Docker driver with root privileges
	I0116 03:42:01.072756  787862 cni.go:84] Creating CNI manager for ""
	I0116 03:42:01.072772  787862 cni.go:136] 0 nodes found, recommending kindnet
	I0116 03:42:01.072782  787862 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0116 03:42:01.072797  787862 start_flags.go:321] config:
	{Name:multinode-741097 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-741097 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:42:01.074792  787862 out.go:177] * Starting control plane node multinode-741097 in cluster multinode-741097
	I0116 03:42:01.076488  787862 cache.go:121] Beginning downloading kic base image for docker with crio
	I0116 03:42:01.078282  787862 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0116 03:42:01.079926  787862 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 03:42:01.079973  787862 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17967-719286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I0116 03:42:01.079996  787862 cache.go:56] Caching tarball of preloaded images
	I0116 03:42:01.080008  787862 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0116 03:42:01.080084  787862 preload.go:174] Found /home/jenkins/minikube-integration/17967-719286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0116 03:42:01.080095  787862 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0116 03:42:01.080422  787862 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/multinode-741097/config.json ...
	I0116 03:42:01.080462  787862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/multinode-741097/config.json: {Name:mkeae11d88102c631877133f02559827aed6c8ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:42:01.097005  787862 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0116 03:42:01.097029  787862 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0116 03:42:01.097050  787862 cache.go:194] Successfully downloaded all kic artifacts
	I0116 03:42:01.097112  787862 start.go:365] acquiring machines lock for multinode-741097: {Name:mkd2e1507762dfed61a2af2041ba59d60af0322a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 03:42:01.097236  787862 start.go:369] acquired machines lock for "multinode-741097" in 102.672µs
	I0116 03:42:01.097263  787862 start.go:93] Provisioning new machine with config: &{Name:multinode-741097 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-741097 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 03:42:01.097347  787862 start.go:125] createHost starting for "" (driver="docker")
	I0116 03:42:01.100693  787862 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0116 03:42:01.100949  787862 start.go:159] libmachine.API.Create for "multinode-741097" (driver="docker")
	I0116 03:42:01.100980  787862 client.go:168] LocalClient.Create starting
	I0116 03:42:01.101095  787862 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17967-719286/.minikube/certs/ca.pem
	I0116 03:42:01.101131  787862 main.go:141] libmachine: Decoding PEM data...
	I0116 03:42:01.101151  787862 main.go:141] libmachine: Parsing certificate...
	I0116 03:42:01.101207  787862 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17967-719286/.minikube/certs/cert.pem
	I0116 03:42:01.101228  787862 main.go:141] libmachine: Decoding PEM data...
	I0116 03:42:01.101243  787862 main.go:141] libmachine: Parsing certificate...
	I0116 03:42:01.101602  787862 cli_runner.go:164] Run: docker network inspect multinode-741097 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0116 03:42:01.120991  787862 cli_runner.go:211] docker network inspect multinode-741097 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0116 03:42:01.121064  787862 network_create.go:281] running [docker network inspect multinode-741097] to gather additional debugging logs...
	I0116 03:42:01.121086  787862 cli_runner.go:164] Run: docker network inspect multinode-741097
	W0116 03:42:01.140261  787862 cli_runner.go:211] docker network inspect multinode-741097 returned with exit code 1
	I0116 03:42:01.140290  787862 network_create.go:284] error running [docker network inspect multinode-741097]: docker network inspect multinode-741097: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-741097 not found
	I0116 03:42:01.140313  787862 network_create.go:286] output of [docker network inspect multinode-741097]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-741097 not found
	
	** /stderr **
	I0116 03:42:01.140399  787862 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0116 03:42:01.156907  787862 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3ff8030a7ff0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:cd:78:60:c2} reservation:<nil>}
	I0116 03:42:01.157248  787862 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40024d8c80}
	I0116 03:42:01.157269  787862 network_create.go:124] attempt to create docker network multinode-741097 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0116 03:42:01.157332  787862 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-741097 multinode-741097
	I0116 03:42:01.229194  787862 network_create.go:108] docker network multinode-741097 192.168.58.0/24 created
	I0116 03:42:01.229225  787862 kic.go:121] calculated static IP "192.168.58.2" for the "multinode-741097" container
	I0116 03:42:01.229302  787862 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0116 03:42:01.245126  787862 cli_runner.go:164] Run: docker volume create multinode-741097 --label name.minikube.sigs.k8s.io=multinode-741097 --label created_by.minikube.sigs.k8s.io=true
	I0116 03:42:01.262455  787862 oci.go:103] Successfully created a docker volume multinode-741097
	I0116 03:42:01.262551  787862 cli_runner.go:164] Run: docker run --rm --name multinode-741097-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-741097 --entrypoint /usr/bin/test -v multinode-741097:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0116 03:42:01.854081  787862 oci.go:107] Successfully prepared a docker volume multinode-741097
	I0116 03:42:01.854129  787862 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 03:42:01.854148  787862 kic.go:194] Starting extracting preloaded images to volume ...
	I0116 03:42:01.854236  787862 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17967-719286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-741097:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0116 03:42:06.151457  787862 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17967-719286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-741097:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.297167547s)
	I0116 03:42:06.151492  787862 kic.go:203] duration metric: took 4.297340 seconds to extract preloaded images to volume
	W0116 03:42:06.151627  787862 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0116 03:42:06.151747  787862 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0116 03:42:06.221481  787862 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-741097 --name multinode-741097 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-741097 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-741097 --network multinode-741097 --ip 192.168.58.2 --volume multinode-741097:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0
	I0116 03:42:06.569220  787862 cli_runner.go:164] Run: docker container inspect multinode-741097 --format={{.State.Running}}
	I0116 03:42:06.594406  787862 cli_runner.go:164] Run: docker container inspect multinode-741097 --format={{.State.Status}}
	I0116 03:42:06.615871  787862 cli_runner.go:164] Run: docker exec multinode-741097 stat /var/lib/dpkg/alternatives/iptables
	I0116 03:42:06.688506  787862 oci.go:144] the created container "multinode-741097" has a running status.
	I0116 03:42:06.688537  787862 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17967-719286/.minikube/machines/multinode-741097/id_rsa...
	I0116 03:42:07.624161  787862 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-719286/.minikube/machines/multinode-741097/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0116 03:42:07.624208  787862 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17967-719286/.minikube/machines/multinode-741097/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0116 03:42:07.646532  787862 cli_runner.go:164] Run: docker container inspect multinode-741097 --format={{.State.Status}}
	I0116 03:42:07.666714  787862 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0116 03:42:07.666736  787862 kic_runner.go:114] Args: [docker exec --privileged multinode-741097 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0116 03:42:07.738045  787862 cli_runner.go:164] Run: docker container inspect multinode-741097 --format={{.State.Status}}
	I0116 03:42:07.766765  787862 machine.go:88] provisioning docker machine ...
	I0116 03:42:07.766812  787862 ubuntu.go:169] provisioning hostname "multinode-741097"
	I0116 03:42:07.766878  787862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-741097
	I0116 03:42:07.790579  787862 main.go:141] libmachine: Using SSH client type: native
	I0116 03:42:07.791014  787862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 33557 <nil> <nil>}
	I0116 03:42:07.791034  787862 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-741097 && echo "multinode-741097" | sudo tee /etc/hostname
	I0116 03:42:07.941713  787862 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-741097
	
	I0116 03:42:07.941865  787862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-741097
	I0116 03:42:07.959705  787862 main.go:141] libmachine: Using SSH client type: native
	I0116 03:42:07.960143  787862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 33557 <nil> <nil>}
	I0116 03:42:07.960168  787862 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-741097' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-741097/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-741097' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 03:42:08.097255  787862 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 03:42:08.097293  787862 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17967-719286/.minikube CaCertPath:/home/jenkins/minikube-integration/17967-719286/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17967-719286/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17967-719286/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17967-719286/.minikube}
	I0116 03:42:08.097317  787862 ubuntu.go:177] setting up certificates
	I0116 03:42:08.097335  787862 provision.go:83] configureAuth start
	I0116 03:42:08.097408  787862 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-741097
	I0116 03:42:08.115209  787862 provision.go:138] copyHostCerts
	I0116 03:42:08.115247  787862 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-719286/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17967-719286/.minikube/ca.pem
	I0116 03:42:08.115277  787862 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-719286/.minikube/ca.pem, removing ...
	I0116 03:42:08.115283  787862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-719286/.minikube/ca.pem
	I0116 03:42:08.115362  787862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-719286/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17967-719286/.minikube/ca.pem (1082 bytes)
	I0116 03:42:08.115445  787862 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-719286/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17967-719286/.minikube/cert.pem
	I0116 03:42:08.115462  787862 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-719286/.minikube/cert.pem, removing ...
	I0116 03:42:08.115466  787862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-719286/.minikube/cert.pem
	I0116 03:42:08.115491  787862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-719286/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17967-719286/.minikube/cert.pem (1123 bytes)
	I0116 03:42:08.115538  787862 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-719286/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17967-719286/.minikube/key.pem
	I0116 03:42:08.115553  787862 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-719286/.minikube/key.pem, removing ...
	I0116 03:42:08.115557  787862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-719286/.minikube/key.pem
	I0116 03:42:08.115581  787862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-719286/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17967-719286/.minikube/key.pem (1675 bytes)
	I0116 03:42:08.115633  787862 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17967-719286/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17967-719286/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17967-719286/.minikube/certs/ca-key.pem org=jenkins.multinode-741097 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-741097]
	I0116 03:42:08.710008  787862 provision.go:172] copyRemoteCerts
	I0116 03:42:08.710102  787862 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 03:42:08.710149  787862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-741097
	I0116 03:42:08.728108  787862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33557 SSHKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/machines/multinode-741097/id_rsa Username:docker}
	I0116 03:42:08.826133  787862 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-719286/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0116 03:42:08.826267  787862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-719286/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0116 03:42:08.853414  787862 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-719286/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0116 03:42:08.853474  787862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-719286/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 03:42:08.880545  787862 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-719286/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0116 03:42:08.880605  787862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-719286/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0116 03:42:08.907104  787862 provision.go:86] duration metric: configureAuth took 809.751229ms
	I0116 03:42:08.907132  787862 ubuntu.go:193] setting minikube options for container-runtime
	I0116 03:42:08.907328  787862 config.go:182] Loaded profile config "multinode-741097": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:42:08.907433  787862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-741097
	I0116 03:42:08.924562  787862 main.go:141] libmachine: Using SSH client type: native
	I0116 03:42:08.924986  787862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 33557 <nil> <nil>}
	I0116 03:42:08.925007  787862 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 03:42:09.177353  787862 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 03:42:09.177379  787862 machine.go:91] provisioned docker machine in 1.410590601s
	I0116 03:42:09.177389  787862 client.go:171] LocalClient.Create took 8.076402904s
	I0116 03:42:09.177401  787862 start.go:167] duration metric: libmachine.API.Create for "multinode-741097" took 8.076454466s
	I0116 03:42:09.177409  787862 start.go:300] post-start starting for "multinode-741097" (driver="docker")
	I0116 03:42:09.177419  787862 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 03:42:09.177477  787862 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 03:42:09.177534  787862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-741097
	I0116 03:42:09.195513  787862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33557 SSHKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/machines/multinode-741097/id_rsa Username:docker}
	I0116 03:42:09.294573  787862 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 03:42:09.298276  787862 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I0116 03:42:09.298298  787862 command_runner.go:130] > NAME="Ubuntu"
	I0116 03:42:09.298307  787862 command_runner.go:130] > VERSION_ID="22.04"
	I0116 03:42:09.298313  787862 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I0116 03:42:09.298320  787862 command_runner.go:130] > VERSION_CODENAME=jammy
	I0116 03:42:09.298325  787862 command_runner.go:130] > ID=ubuntu
	I0116 03:42:09.298330  787862 command_runner.go:130] > ID_LIKE=debian
	I0116 03:42:09.298336  787862 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0116 03:42:09.298346  787862 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0116 03:42:09.298356  787862 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0116 03:42:09.298365  787862 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0116 03:42:09.298373  787862 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0116 03:42:09.298640  787862 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0116 03:42:09.298673  787862 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0116 03:42:09.298689  787862 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0116 03:42:09.298697  787862 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0116 03:42:09.298709  787862 filesync.go:126] Scanning /home/jenkins/minikube-integration/17967-719286/.minikube/addons for local assets ...
	I0116 03:42:09.298765  787862 filesync.go:126] Scanning /home/jenkins/minikube-integration/17967-719286/.minikube/files for local assets ...
	I0116 03:42:09.298854  787862 filesync.go:149] local asset: /home/jenkins/minikube-integration/17967-719286/.minikube/files/etc/ssl/certs/7246212.pem -> 7246212.pem in /etc/ssl/certs
	I0116 03:42:09.298865  787862 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-719286/.minikube/files/etc/ssl/certs/7246212.pem -> /etc/ssl/certs/7246212.pem
	I0116 03:42:09.298966  787862 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 03:42:09.309003  787862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-719286/.minikube/files/etc/ssl/certs/7246212.pem --> /etc/ssl/certs/7246212.pem (1708 bytes)
	I0116 03:42:09.335350  787862 start.go:303] post-start completed in 157.926368ms
	I0116 03:42:09.335714  787862 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-741097
	I0116 03:42:09.352032  787862 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/multinode-741097/config.json ...
	I0116 03:42:09.352299  787862 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0116 03:42:09.352350  787862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-741097
	I0116 03:42:09.368101  787862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33557 SSHKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/machines/multinode-741097/id_rsa Username:docker}
	I0116 03:42:09.461794  787862 command_runner.go:130] > 14%!
	(MISSING)I0116 03:42:09.461868  787862 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0116 03:42:09.466943  787862 command_runner.go:130] > 167G
	I0116 03:42:09.466969  787862 start.go:128] duration metric: createHost completed in 8.369611625s
	I0116 03:42:09.466978  787862 start.go:83] releasing machines lock for "multinode-741097", held for 8.369731569s
	I0116 03:42:09.467041  787862 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-741097
	I0116 03:42:09.483821  787862 ssh_runner.go:195] Run: cat /version.json
	I0116 03:42:09.483837  787862 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 03:42:09.483873  787862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-741097
	I0116 03:42:09.483900  787862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-741097
	I0116 03:42:09.501950  787862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33557 SSHKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/machines/multinode-741097/id_rsa Username:docker}
	I0116 03:42:09.508206  787862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33557 SSHKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/machines/multinode-741097/id_rsa Username:docker}
	I0116 03:42:09.596158  787862 command_runner.go:130] > {"iso_version": "v1.32.1-1703784139-17866", "kicbase_version": "v0.0.42-1704759386-17866", "minikube_version": "v1.32.0", "commit": "3c45a4d018cdc90b01d9bcb479fb293aad58ed8f"}
	I0116 03:42:09.596271  787862 ssh_runner.go:195] Run: systemctl --version
	I0116 03:42:09.734104  787862 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0116 03:42:09.734145  787862 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.11)
	I0116 03:42:09.734169  787862 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0116 03:42:09.734239  787862 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 03:42:09.880473  787862 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0116 03:42:09.885414  787862 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0116 03:42:09.885443  787862 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0116 03:42:09.885451  787862 command_runner.go:130] > Device: 3ah/58d	Inode: 1304622     Links: 1
	I0116 03:42:09.885459  787862 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0116 03:42:09.885466  787862 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0116 03:42:09.885472  787862 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0116 03:42:09.885482  787862 command_runner.go:130] > Change: 2024-01-16 03:20:35.860569315 +0000
	I0116 03:42:09.885489  787862 command_runner.go:130] >  Birth: 2024-01-16 03:20:35.860569315 +0000
	I0116 03:42:09.885921  787862 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 03:42:09.909100  787862 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0116 03:42:09.909183  787862 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 03:42:09.942454  787862 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0116 03:42:09.942481  787862 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0116 03:42:09.942488  787862 start.go:475] detecting cgroup driver to use...
	I0116 03:42:09.942516  787862 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0116 03:42:09.942592  787862 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 03:42:09.960608  787862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 03:42:09.973562  787862 docker.go:217] disabling cri-docker service (if available) ...
	I0116 03:42:09.973680  787862 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 03:42:09.988769  787862 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 03:42:10.004913  787862 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 03:42:10.100451  787862 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 03:42:10.117030  787862 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0116 03:42:10.202381  787862 docker.go:233] disabling docker service ...
	I0116 03:42:10.202454  787862 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 03:42:10.223181  787862 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 03:42:10.236609  787862 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 03:42:10.338933  787862 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0116 03:42:10.339001  787862 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 03:42:10.351947  787862 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0116 03:42:10.431360  787862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 03:42:10.445228  787862 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 03:42:10.463278  787862 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0116 03:42:10.464621  787862 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 03:42:10.464720  787862 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:42:10.476041  787862 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 03:42:10.476151  787862 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:42:10.487737  787862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:42:10.499159  787862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:42:10.510740  787862 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 03:42:10.521539  787862 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 03:42:10.530168  787862 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0116 03:42:10.531288  787862 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 03:42:10.540949  787862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:42:10.625824  787862 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 03:42:10.748859  787862 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 03:42:10.748936  787862 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 03:42:10.753302  787862 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0116 03:42:10.753323  787862 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0116 03:42:10.753334  787862 command_runner.go:130] > Device: 43h/67d	Inode: 186         Links: 1
	I0116 03:42:10.753343  787862 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0116 03:42:10.753349  787862 command_runner.go:130] > Access: 2024-01-16 03:42:10.734761332 +0000
	I0116 03:42:10.753363  787862 command_runner.go:130] > Modify: 2024-01-16 03:42:10.734761332 +0000
	I0116 03:42:10.753376  787862 command_runner.go:130] > Change: 2024-01-16 03:42:10.734761332 +0000
	I0116 03:42:10.753380  787862 command_runner.go:130] >  Birth: -
	I0116 03:42:10.753398  787862 start.go:543] Will wait 60s for crictl version
	I0116 03:42:10.753451  787862 ssh_runner.go:195] Run: which crictl
	I0116 03:42:10.757192  787862 command_runner.go:130] > /usr/bin/crictl
	I0116 03:42:10.757609  787862 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 03:42:10.792943  787862 command_runner.go:130] > Version:  0.1.0
	I0116 03:42:10.792970  787862 command_runner.go:130] > RuntimeName:  cri-o
	I0116 03:42:10.793103  787862 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0116 03:42:10.793256  787862 command_runner.go:130] > RuntimeApiVersion:  v1
	I0116 03:42:10.795671  787862 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0116 03:42:10.795751  787862 ssh_runner.go:195] Run: crio --version
	I0116 03:42:10.841470  787862 command_runner.go:130] > crio version 1.24.6
	I0116 03:42:10.841493  787862 command_runner.go:130] > Version:          1.24.6
	I0116 03:42:10.841503  787862 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0116 03:42:10.841517  787862 command_runner.go:130] > GitTreeState:     clean
	I0116 03:42:10.841524  787862 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0116 03:42:10.841530  787862 command_runner.go:130] > GoVersion:        go1.18.2
	I0116 03:42:10.841535  787862 command_runner.go:130] > Compiler:         gc
	I0116 03:42:10.841541  787862 command_runner.go:130] > Platform:         linux/arm64
	I0116 03:42:10.841550  787862 command_runner.go:130] > Linkmode:         dynamic
	I0116 03:42:10.841560  787862 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0116 03:42:10.841568  787862 command_runner.go:130] > SeccompEnabled:   true
	I0116 03:42:10.841574  787862 command_runner.go:130] > AppArmorEnabled:  false
	I0116 03:42:10.843454  787862 ssh_runner.go:195] Run: crio --version
	I0116 03:42:10.883238  787862 command_runner.go:130] > crio version 1.24.6
	I0116 03:42:10.883261  787862 command_runner.go:130] > Version:          1.24.6
	I0116 03:42:10.883270  787862 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0116 03:42:10.883277  787862 command_runner.go:130] > GitTreeState:     clean
	I0116 03:42:10.883284  787862 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0116 03:42:10.883290  787862 command_runner.go:130] > GoVersion:        go1.18.2
	I0116 03:42:10.883295  787862 command_runner.go:130] > Compiler:         gc
	I0116 03:42:10.883301  787862 command_runner.go:130] > Platform:         linux/arm64
	I0116 03:42:10.883311  787862 command_runner.go:130] > Linkmode:         dynamic
	I0116 03:42:10.883328  787862 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0116 03:42:10.883337  787862 command_runner.go:130] > SeccompEnabled:   true
	I0116 03:42:10.883343  787862 command_runner.go:130] > AppArmorEnabled:  false
	I0116 03:42:10.887296  787862 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0116 03:42:10.889117  787862 cli_runner.go:164] Run: docker network inspect multinode-741097 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0116 03:42:10.905743  787862 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0116 03:42:10.910205  787862 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:42:10.922911  787862 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 03:42:10.922984  787862 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:42:10.986709  787862 command_runner.go:130] > {
	I0116 03:42:10.986734  787862 command_runner.go:130] >   "images": [
	I0116 03:42:10.986740  787862 command_runner.go:130] >     {
	I0116 03:42:10.986750  787862 command_runner.go:130] >       "id": "04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26",
	I0116 03:42:10.986759  787862 command_runner.go:130] >       "repoTags": [
	I0116 03:42:10.986767  787862 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0116 03:42:10.986772  787862 command_runner.go:130] >       ],
	I0116 03:42:10.986777  787862 command_runner.go:130] >       "repoDigests": [
	I0116 03:42:10.986789  787862 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0116 03:42:10.986802  787862 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"
	I0116 03:42:10.986807  787862 command_runner.go:130] >       ],
	I0116 03:42:10.986815  787862 command_runner.go:130] >       "size": "60867618",
	I0116 03:42:10.986820  787862 command_runner.go:130] >       "uid": null,
	I0116 03:42:10.986826  787862 command_runner.go:130] >       "username": "",
	I0116 03:42:10.986837  787862 command_runner.go:130] >       "spec": null,
	I0116 03:42:10.986842  787862 command_runner.go:130] >       "pinned": false
	I0116 03:42:10.986850  787862 command_runner.go:130] >     },
	I0116 03:42:10.986854  787862 command_runner.go:130] >     {
	I0116 03:42:10.986862  787862 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I0116 03:42:10.986870  787862 command_runner.go:130] >       "repoTags": [
	I0116 03:42:10.986877  787862 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0116 03:42:10.986884  787862 command_runner.go:130] >       ],
	I0116 03:42:10.986890  787862 command_runner.go:130] >       "repoDigests": [
	I0116 03:42:10.986903  787862 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I0116 03:42:10.986913  787862 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0116 03:42:10.986921  787862 command_runner.go:130] >       ],
	I0116 03:42:10.986929  787862 command_runner.go:130] >       "size": "29037500",
	I0116 03:42:10.986937  787862 command_runner.go:130] >       "uid": null,
	I0116 03:42:10.986942  787862 command_runner.go:130] >       "username": "",
	I0116 03:42:10.986947  787862 command_runner.go:130] >       "spec": null,
	I0116 03:42:10.986954  787862 command_runner.go:130] >       "pinned": false
	I0116 03:42:10.986959  787862 command_runner.go:130] >     },
	I0116 03:42:10.986963  787862 command_runner.go:130] >     {
	I0116 03:42:10.986973  787862 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I0116 03:42:10.986979  787862 command_runner.go:130] >       "repoTags": [
	I0116 03:42:10.986987  787862 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0116 03:42:10.986993  787862 command_runner.go:130] >       ],
	I0116 03:42:10.987000  787862 command_runner.go:130] >       "repoDigests": [
	I0116 03:42:10.987010  787862 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I0116 03:42:10.987022  787862 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I0116 03:42:10.987031  787862 command_runner.go:130] >       ],
	I0116 03:42:10.987040  787862 command_runner.go:130] >       "size": "51393451",
	I0116 03:42:10.987045  787862 command_runner.go:130] >       "uid": null,
	I0116 03:42:10.987052  787862 command_runner.go:130] >       "username": "",
	I0116 03:42:10.987057  787862 command_runner.go:130] >       "spec": null,
	I0116 03:42:10.987065  787862 command_runner.go:130] >       "pinned": false
	I0116 03:42:10.987069  787862 command_runner.go:130] >     },
	I0116 03:42:10.987080  787862 command_runner.go:130] >     {
	I0116 03:42:10.987088  787862 command_runner.go:130] >       "id": "9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace",
	I0116 03:42:10.987095  787862 command_runner.go:130] >       "repoTags": [
	I0116 03:42:10.987101  787862 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0116 03:42:10.987108  787862 command_runner.go:130] >       ],
	I0116 03:42:10.987113  787862 command_runner.go:130] >       "repoDigests": [
	I0116 03:42:10.987122  787862 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3",
	I0116 03:42:10.987134  787862 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"
	I0116 03:42:10.987146  787862 command_runner.go:130] >       ],
	I0116 03:42:10.987154  787862 command_runner.go:130] >       "size": "182203183",
	I0116 03:42:10.987159  787862 command_runner.go:130] >       "uid": {
	I0116 03:42:10.987166  787862 command_runner.go:130] >         "value": "0"
	I0116 03:42:10.987170  787862 command_runner.go:130] >       },
	I0116 03:42:10.987176  787862 command_runner.go:130] >       "username": "",
	I0116 03:42:10.987184  787862 command_runner.go:130] >       "spec": null,
	I0116 03:42:10.987190  787862 command_runner.go:130] >       "pinned": false
	I0116 03:42:10.987194  787862 command_runner.go:130] >     },
	I0116 03:42:10.987198  787862 command_runner.go:130] >     {
	I0116 03:42:10.987206  787862 command_runner.go:130] >       "id": "04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419",
	I0116 03:42:10.987213  787862 command_runner.go:130] >       "repoTags": [
	I0116 03:42:10.987220  787862 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0116 03:42:10.987225  787862 command_runner.go:130] >       ],
	I0116 03:42:10.987232  787862 command_runner.go:130] >       "repoDigests": [
	I0116 03:42:10.987242  787862 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb",
	I0116 03:42:10.987253  787862 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2"
	I0116 03:42:10.987258  787862 command_runner.go:130] >       ],
	I0116 03:42:10.987266  787862 command_runner.go:130] >       "size": "121119694",
	I0116 03:42:10.987271  787862 command_runner.go:130] >       "uid": {
	I0116 03:42:10.987276  787862 command_runner.go:130] >         "value": "0"
	I0116 03:42:10.987285  787862 command_runner.go:130] >       },
	I0116 03:42:10.987290  787862 command_runner.go:130] >       "username": "",
	I0116 03:42:10.987297  787862 command_runner.go:130] >       "spec": null,
	I0116 03:42:10.987302  787862 command_runner.go:130] >       "pinned": false
	I0116 03:42:10.987307  787862 command_runner.go:130] >     },
	I0116 03:42:10.987314  787862 command_runner.go:130] >     {
	I0116 03:42:10.987321  787862 command_runner.go:130] >       "id": "9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b",
	I0116 03:42:10.987329  787862 command_runner.go:130] >       "repoTags": [
	I0116 03:42:10.987335  787862 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0116 03:42:10.987342  787862 command_runner.go:130] >       ],
	I0116 03:42:10.987347  787862 command_runner.go:130] >       "repoDigests": [
	I0116 03:42:10.987359  787862 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0116 03:42:10.987369  787862 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e"
	I0116 03:42:10.987377  787862 command_runner.go:130] >       ],
	I0116 03:42:10.987382  787862 command_runner.go:130] >       "size": "117252916",
	I0116 03:42:10.987387  787862 command_runner.go:130] >       "uid": {
	I0116 03:42:10.987394  787862 command_runner.go:130] >         "value": "0"
	I0116 03:42:10.987399  787862 command_runner.go:130] >       },
	I0116 03:42:10.987406  787862 command_runner.go:130] >       "username": "",
	I0116 03:42:10.987414  787862 command_runner.go:130] >       "spec": null,
	I0116 03:42:10.987419  787862 command_runner.go:130] >       "pinned": false
	I0116 03:42:10.987423  787862 command_runner.go:130] >     },
	I0116 03:42:10.987430  787862 command_runner.go:130] >     {
	I0116 03:42:10.987438  787862 command_runner.go:130] >       "id": "3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39",
	I0116 03:42:10.987446  787862 command_runner.go:130] >       "repoTags": [
	I0116 03:42:10.987452  787862 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0116 03:42:10.987459  787862 command_runner.go:130] >       ],
	I0116 03:42:10.987464  787862 command_runner.go:130] >       "repoDigests": [
	I0116 03:42:10.987479  787862 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68",
	I0116 03:42:10.987492  787862 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0116 03:42:10.987497  787862 command_runner.go:130] >       ],
	I0116 03:42:10.987504  787862 command_runner.go:130] >       "size": "69992343",
	I0116 03:42:10.987510  787862 command_runner.go:130] >       "uid": null,
	I0116 03:42:10.987515  787862 command_runner.go:130] >       "username": "",
	I0116 03:42:10.987522  787862 command_runner.go:130] >       "spec": null,
	I0116 03:42:10.987527  787862 command_runner.go:130] >       "pinned": false
	I0116 03:42:10.987534  787862 command_runner.go:130] >     },
	I0116 03:42:10.987541  787862 command_runner.go:130] >     {
	I0116 03:42:10.987549  787862 command_runner.go:130] >       "id": "05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54",
	I0116 03:42:10.987556  787862 command_runner.go:130] >       "repoTags": [
	I0116 03:42:10.987563  787862 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0116 03:42:10.987569  787862 command_runner.go:130] >       ],
	I0116 03:42:10.987575  787862 command_runner.go:130] >       "repoDigests": [
	I0116 03:42:10.987595  787862 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0116 03:42:10.987608  787862 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe"
	I0116 03:42:10.987613  787862 command_runner.go:130] >       ],
	I0116 03:42:10.987621  787862 command_runner.go:130] >       "size": "59253556",
	I0116 03:42:10.987625  787862 command_runner.go:130] >       "uid": {
	I0116 03:42:10.987631  787862 command_runner.go:130] >         "value": "0"
	I0116 03:42:10.987637  787862 command_runner.go:130] >       },
	I0116 03:42:10.987643  787862 command_runner.go:130] >       "username": "",
	I0116 03:42:10.987648  787862 command_runner.go:130] >       "spec": null,
	I0116 03:42:10.987655  787862 command_runner.go:130] >       "pinned": false
	I0116 03:42:10.987659  787862 command_runner.go:130] >     },
	I0116 03:42:10.987666  787862 command_runner.go:130] >     {
	I0116 03:42:10.987677  787862 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I0116 03:42:10.987682  787862 command_runner.go:130] >       "repoTags": [
	I0116 03:42:10.987690  787862 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0116 03:42:10.987695  787862 command_runner.go:130] >       ],
	I0116 03:42:10.987700  787862 command_runner.go:130] >       "repoDigests": [
	I0116 03:42:10.987711  787862 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I0116 03:42:10.987720  787862 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I0116 03:42:10.987728  787862 command_runner.go:130] >       ],
	I0116 03:42:10.987733  787862 command_runner.go:130] >       "size": "520014",
	I0116 03:42:10.987738  787862 command_runner.go:130] >       "uid": {
	I0116 03:42:10.987746  787862 command_runner.go:130] >         "value": "65535"
	I0116 03:42:10.987752  787862 command_runner.go:130] >       },
	I0116 03:42:10.987759  787862 command_runner.go:130] >       "username": "",
	I0116 03:42:10.987767  787862 command_runner.go:130] >       "spec": null,
	I0116 03:42:10.987772  787862 command_runner.go:130] >       "pinned": false
	I0116 03:42:10.987779  787862 command_runner.go:130] >     }
	I0116 03:42:10.987783  787862 command_runner.go:130] >   ]
	I0116 03:42:10.987792  787862 command_runner.go:130] > }
	I0116 03:42:10.987981  787862 crio.go:496] all images are preloaded for cri-o runtime.
	I0116 03:42:10.987995  787862 crio.go:415] Images already preloaded, skipping extraction
	I0116 03:42:10.988049  787862 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 03:42:11.029264  787862 command_runner.go:130] > {
	I0116 03:42:11.029300  787862 command_runner.go:130] >   "images": [
	I0116 03:42:11.029306  787862 command_runner.go:130] >     {
	I0116 03:42:11.029331  787862 command_runner.go:130] >       "id": "04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26",
	I0116 03:42:11.029345  787862 command_runner.go:130] >       "repoTags": [
	I0116 03:42:11.029372  787862 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0116 03:42:11.029382  787862 command_runner.go:130] >       ],
	I0116 03:42:11.029398  787862 command_runner.go:130] >       "repoDigests": [
	I0116 03:42:11.029423  787862 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0116 03:42:11.029450  787862 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"
	I0116 03:42:11.029461  787862 command_runner.go:130] >       ],
	I0116 03:42:11.029477  787862 command_runner.go:130] >       "size": "60867618",
	I0116 03:42:11.029494  787862 command_runner.go:130] >       "uid": null,
	I0116 03:42:11.029502  787862 command_runner.go:130] >       "username": "",
	I0116 03:42:11.029509  787862 command_runner.go:130] >       "spec": null,
	I0116 03:42:11.029529  787862 command_runner.go:130] >       "pinned": false
	I0116 03:42:11.029539  787862 command_runner.go:130] >     },
	I0116 03:42:11.029543  787862 command_runner.go:130] >     {
	I0116 03:42:11.029554  787862 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I0116 03:42:11.029562  787862 command_runner.go:130] >       "repoTags": [
	I0116 03:42:11.029569  787862 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0116 03:42:11.029577  787862 command_runner.go:130] >       ],
	I0116 03:42:11.029582  787862 command_runner.go:130] >       "repoDigests": [
	I0116 03:42:11.029603  787862 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I0116 03:42:11.029615  787862 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0116 03:42:11.029619  787862 command_runner.go:130] >       ],
	I0116 03:42:11.029626  787862 command_runner.go:130] >       "size": "29037500",
	I0116 03:42:11.029631  787862 command_runner.go:130] >       "uid": null,
	I0116 03:42:11.029636  787862 command_runner.go:130] >       "username": "",
	I0116 03:42:11.029640  787862 command_runner.go:130] >       "spec": null,
	I0116 03:42:11.029656  787862 command_runner.go:130] >       "pinned": false
	I0116 03:42:11.029664  787862 command_runner.go:130] >     },
	I0116 03:42:11.029686  787862 command_runner.go:130] >     {
	I0116 03:42:11.029709  787862 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I0116 03:42:11.029718  787862 command_runner.go:130] >       "repoTags": [
	I0116 03:42:11.029725  787862 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0116 03:42:11.029733  787862 command_runner.go:130] >       ],
	I0116 03:42:11.029738  787862 command_runner.go:130] >       "repoDigests": [
	I0116 03:42:11.029763  787862 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I0116 03:42:11.029786  787862 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I0116 03:42:11.029797  787862 command_runner.go:130] >       ],
	I0116 03:42:11.029807  787862 command_runner.go:130] >       "size": "51393451",
	I0116 03:42:11.029813  787862 command_runner.go:130] >       "uid": null,
	I0116 03:42:11.029834  787862 command_runner.go:130] >       "username": "",
	I0116 03:42:11.029846  787862 command_runner.go:130] >       "spec": null,
	I0116 03:42:11.029861  787862 command_runner.go:130] >       "pinned": false
	I0116 03:42:11.029871  787862 command_runner.go:130] >     },
	I0116 03:42:11.029875  787862 command_runner.go:130] >     {
	I0116 03:42:11.029889  787862 command_runner.go:130] >       "id": "9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace",
	I0116 03:42:11.029909  787862 command_runner.go:130] >       "repoTags": [
	I0116 03:42:11.029921  787862 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0116 03:42:11.029939  787862 command_runner.go:130] >       ],
	I0116 03:42:11.029949  787862 command_runner.go:130] >       "repoDigests": [
	I0116 03:42:11.029958  787862 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3",
	I0116 03:42:11.029981  787862 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"
	I0116 03:42:11.030008  787862 command_runner.go:130] >       ],
	I0116 03:42:11.030019  787862 command_runner.go:130] >       "size": "182203183",
	I0116 03:42:11.030028  787862 command_runner.go:130] >       "uid": {
	I0116 03:42:11.030034  787862 command_runner.go:130] >         "value": "0"
	I0116 03:42:11.030041  787862 command_runner.go:130] >       },
	I0116 03:42:11.030058  787862 command_runner.go:130] >       "username": "",
	I0116 03:42:11.030070  787862 command_runner.go:130] >       "spec": null,
	I0116 03:42:11.030087  787862 command_runner.go:130] >       "pinned": false
	I0116 03:42:11.030099  787862 command_runner.go:130] >     },
	I0116 03:42:11.030104  787862 command_runner.go:130] >     {
	I0116 03:42:11.030115  787862 command_runner.go:130] >       "id": "04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419",
	I0116 03:42:11.030138  787862 command_runner.go:130] >       "repoTags": [
	I0116 03:42:11.030150  787862 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0116 03:42:11.030168  787862 command_runner.go:130] >       ],
	I0116 03:42:11.030173  787862 command_runner.go:130] >       "repoDigests": [
	I0116 03:42:11.030189  787862 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb",
	I0116 03:42:11.030214  787862 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2"
	I0116 03:42:11.030224  787862 command_runner.go:130] >       ],
	I0116 03:42:11.030241  787862 command_runner.go:130] >       "size": "121119694",
	I0116 03:42:11.030252  787862 command_runner.go:130] >       "uid": {
	I0116 03:42:11.030257  787862 command_runner.go:130] >         "value": "0"
	I0116 03:42:11.030265  787862 command_runner.go:130] >       },
	I0116 03:42:11.030270  787862 command_runner.go:130] >       "username": "",
	I0116 03:42:11.030290  787862 command_runner.go:130] >       "spec": null,
	I0116 03:42:11.030301  787862 command_runner.go:130] >       "pinned": false
	I0116 03:42:11.030315  787862 command_runner.go:130] >     },
	I0116 03:42:11.030328  787862 command_runner.go:130] >     {
	I0116 03:42:11.030341  787862 command_runner.go:130] >       "id": "9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b",
	I0116 03:42:11.030349  787862 command_runner.go:130] >       "repoTags": [
	I0116 03:42:11.030372  787862 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0116 03:42:11.030382  787862 command_runner.go:130] >       ],
	I0116 03:42:11.030398  787862 command_runner.go:130] >       "repoDigests": [
	I0116 03:42:11.030417  787862 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0116 03:42:11.030443  787862 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e"
	I0116 03:42:11.030454  787862 command_runner.go:130] >       ],
	I0116 03:42:11.030470  787862 command_runner.go:130] >       "size": "117252916",
	I0116 03:42:11.030484  787862 command_runner.go:130] >       "uid": {
	I0116 03:42:11.030493  787862 command_runner.go:130] >         "value": "0"
	I0116 03:42:11.030498  787862 command_runner.go:130] >       },
	I0116 03:42:11.030518  787862 command_runner.go:130] >       "username": "",
	I0116 03:42:11.030542  787862 command_runner.go:130] >       "spec": null,
	I0116 03:42:11.030554  787862 command_runner.go:130] >       "pinned": false
	I0116 03:42:11.030563  787862 command_runner.go:130] >     },
	I0116 03:42:11.030567  787862 command_runner.go:130] >     {
	I0116 03:42:11.030578  787862 command_runner.go:130] >       "id": "3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39",
	I0116 03:42:11.030601  787862 command_runner.go:130] >       "repoTags": [
	I0116 03:42:11.030621  787862 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0116 03:42:11.030634  787862 command_runner.go:130] >       ],
	I0116 03:42:11.030643  787862 command_runner.go:130] >       "repoDigests": [
	I0116 03:42:11.030652  787862 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68",
	I0116 03:42:11.030676  787862 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0116 03:42:11.030686  787862 command_runner.go:130] >       ],
	I0116 03:42:11.030702  787862 command_runner.go:130] >       "size": "69992343",
	I0116 03:42:11.030714  787862 command_runner.go:130] >       "uid": null,
	I0116 03:42:11.030723  787862 command_runner.go:130] >       "username": "",
	I0116 03:42:11.030729  787862 command_runner.go:130] >       "spec": null,
	I0116 03:42:11.030747  787862 command_runner.go:130] >       "pinned": false
	I0116 03:42:11.030757  787862 command_runner.go:130] >     },
	I0116 03:42:11.030772  787862 command_runner.go:130] >     {
	I0116 03:42:11.030787  787862 command_runner.go:130] >       "id": "05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54",
	I0116 03:42:11.030796  787862 command_runner.go:130] >       "repoTags": [
	I0116 03:42:11.030802  787862 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0116 03:42:11.030822  787862 command_runner.go:130] >       ],
	I0116 03:42:11.030834  787862 command_runner.go:130] >       "repoDigests": [
	I0116 03:42:11.030864  787862 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0116 03:42:11.030882  787862 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe"
	I0116 03:42:11.030902  787862 command_runner.go:130] >       ],
	I0116 03:42:11.030913  787862 command_runner.go:130] >       "size": "59253556",
	I0116 03:42:11.030928  787862 command_runner.go:130] >       "uid": {
	I0116 03:42:11.030940  787862 command_runner.go:130] >         "value": "0"
	I0116 03:42:11.030945  787862 command_runner.go:130] >       },
	I0116 03:42:11.030950  787862 command_runner.go:130] >       "username": "",
	I0116 03:42:11.030955  787862 command_runner.go:130] >       "spec": null,
	I0116 03:42:11.030960  787862 command_runner.go:130] >       "pinned": false
	I0116 03:42:11.030977  787862 command_runner.go:130] >     },
	I0116 03:42:11.030984  787862 command_runner.go:130] >     {
	I0116 03:42:11.030992  787862 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I0116 03:42:11.030997  787862 command_runner.go:130] >       "repoTags": [
	I0116 03:42:11.031002  787862 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0116 03:42:11.031007  787862 command_runner.go:130] >       ],
	I0116 03:42:11.031012  787862 command_runner.go:130] >       "repoDigests": [
	I0116 03:42:11.031021  787862 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I0116 03:42:11.031030  787862 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I0116 03:42:11.031048  787862 command_runner.go:130] >       ],
	I0116 03:42:11.031055  787862 command_runner.go:130] >       "size": "520014",
	I0116 03:42:11.031059  787862 command_runner.go:130] >       "uid": {
	I0116 03:42:11.031065  787862 command_runner.go:130] >         "value": "65535"
	I0116 03:42:11.031069  787862 command_runner.go:130] >       },
	I0116 03:42:11.031074  787862 command_runner.go:130] >       "username": "",
	I0116 03:42:11.031079  787862 command_runner.go:130] >       "spec": null,
	I0116 03:42:11.031084  787862 command_runner.go:130] >       "pinned": false
	I0116 03:42:11.031088  787862 command_runner.go:130] >     }
	I0116 03:42:11.031092  787862 command_runner.go:130] >   ]
	I0116 03:42:11.031096  787862 command_runner.go:130] > }
	I0116 03:42:11.032841  787862 crio.go:496] all images are preloaded for cri-o runtime.
	I0116 03:42:11.032860  787862 cache_images.go:84] Images are preloaded, skipping loading
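
The preload check above shells out to crictl and decodes the JSON dump just shown. A minimal sketch of that decode step, assuming only the fields visible in the output (struct and helper names here are illustrative, not minikube's actual code):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// crictlImage mirrors one entry of the "images" array printed above.
	// Note that "size" is a quoted string in the crictl output, not a number.
	type crictlImage struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
		Pinned      bool     `json:"pinned"`
	}

	type imageList struct {
		Images []crictlImage `json:"images"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}
		// Index every tag so the required set can be checked in one pass.
		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		for _, want := range []string{
			"registry.k8s.io/kube-apiserver:v1.28.4",
			"registry.k8s.io/etcd:3.5.9-0",
			"registry.k8s.io/pause:3.9",
		} {
			if !have[want] {
				fmt.Println("not preloaded:", want)
				return
			}
		}
		fmt.Println("all images are preloaded for cri-o runtime.")
	}
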
	I0116 03:42:11.032943  787862 ssh_runner.go:195] Run: crio config
	I0116 03:42:11.086991  787862 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0116 03:42:11.087022  787862 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0116 03:42:11.087034  787862 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0116 03:42:11.087039  787862 command_runner.go:130] > #
	I0116 03:42:11.087048  787862 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0116 03:42:11.087056  787862 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0116 03:42:11.087069  787862 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0116 03:42:11.087088  787862 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0116 03:42:11.087096  787862 command_runner.go:130] > # reload'.
	I0116 03:42:11.087108  787862 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0116 03:42:11.087119  787862 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0116 03:42:11.087127  787862 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0116 03:42:11.087138  787862 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
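
The config header above notes that options marked 'This option supports live configuration reload' are re-read when the daemon receives SIGHUP. A sketch of triggering that on a systemd-managed node (assuming the service is named crio; illustrative, not part of the test run):

	package main

	import "os/exec"

	func main() {
		// systemd resolves crio's main PID and delivers SIGHUP, prompting
		// CRI-O to reload the live-reloadable options described above.
		if err := exec.Command("sudo", "systemctl", "kill", "-s", "HUP", "crio").Run(); err != nil {
			panic(err)
		}
	}
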
	I0116 03:42:11.087142  787862 command_runner.go:130] > [crio]
	I0116 03:42:11.087150  787862 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0116 03:42:11.087160  787862 command_runner.go:130] > # container images, in this directory.
	I0116 03:42:11.087173  787862 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0116 03:42:11.087185  787862 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0116 03:42:11.087387  787862 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0116 03:42:11.087412  787862 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0116 03:42:11.087424  787862 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0116 03:42:11.087433  787862 command_runner.go:130] > # storage_driver = "vfs"
	I0116 03:42:11.087440  787862 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0116 03:42:11.087451  787862 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0116 03:42:11.087598  787862 command_runner.go:130] > # storage_option = [
	I0116 03:42:11.087611  787862 command_runner.go:130] > # ]
	I0116 03:42:11.087625  787862 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0116 03:42:11.087637  787862 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0116 03:42:11.087644  787862 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0116 03:42:11.087654  787862 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0116 03:42:11.087663  787862 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0116 03:42:11.087671  787862 command_runner.go:130] > # always happen on a node reboot
	I0116 03:42:11.087678  787862 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0116 03:42:11.087688  787862 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0116 03:42:11.087696  787862 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0116 03:42:11.087708  787862 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0116 03:42:11.087718  787862 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0116 03:42:11.087727  787862 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0116 03:42:11.087741  787862 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0116 03:42:11.087746  787862 command_runner.go:130] > # internal_wipe = true
	I0116 03:42:11.087756  787862 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0116 03:42:11.087764  787862 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0116 03:42:11.087774  787862 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0116 03:42:11.087781  787862 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0116 03:42:11.087795  787862 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0116 03:42:11.087803  787862 command_runner.go:130] > [crio.api]
	I0116 03:42:11.087810  787862 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0116 03:42:11.087816  787862 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0116 03:42:11.087825  787862 command_runner.go:130] > # IP address on which the stream server will listen.
	I0116 03:42:11.087835  787862 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0116 03:42:11.087844  787862 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0116 03:42:11.087853  787862 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0116 03:42:11.087858  787862 command_runner.go:130] > # stream_port = "0"
	I0116 03:42:11.087869  787862 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0116 03:42:11.087874  787862 command_runner.go:130] > # stream_enable_tls = false
	I0116 03:42:11.087882  787862 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0116 03:42:11.087890  787862 command_runner.go:130] > # stream_idle_timeout = ""
	I0116 03:42:11.087898  787862 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0116 03:42:11.087905  787862 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0116 03:42:11.087910  787862 command_runner.go:130] > # minutes.
	I0116 03:42:11.087920  787862 command_runner.go:130] > # stream_tls_cert = ""
	I0116 03:42:11.087928  787862 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0116 03:42:11.087946  787862 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0116 03:42:11.087955  787862 command_runner.go:130] > # stream_tls_key = ""
	I0116 03:42:11.087963  787862 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0116 03:42:11.087973  787862 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0116 03:42:11.087980  787862 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0116 03:42:11.087985  787862 command_runner.go:130] > # stream_tls_ca = ""
	I0116 03:42:11.087998  787862 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0116 03:42:11.088004  787862 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0116 03:42:11.088016  787862 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0116 03:42:11.088022  787862 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0116 03:42:11.088043  787862 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0116 03:42:11.088055  787862 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0116 03:42:11.088074  787862 command_runner.go:130] > [crio.runtime]
	I0116 03:42:11.088083  787862 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0116 03:42:11.088090  787862 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0116 03:42:11.088095  787862 command_runner.go:130] > # "nofile=1024:2048"
	I0116 03:42:11.088106  787862 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0116 03:42:11.088112  787862 command_runner.go:130] > # default_ulimits = [
	I0116 03:42:11.088123  787862 command_runner.go:130] > # ]
	I0116 03:42:11.088131  787862 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0116 03:42:11.088140  787862 command_runner.go:130] > # no_pivot = false
	I0116 03:42:11.088148  787862 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0116 03:42:11.088159  787862 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0116 03:42:11.088165  787862 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0116 03:42:11.088173  787862 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0116 03:42:11.088179  787862 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0116 03:42:11.088190  787862 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0116 03:42:11.088195  787862 command_runner.go:130] > # conmon = ""
	I0116 03:42:11.088205  787862 command_runner.go:130] > # Cgroup setting for conmon
	I0116 03:42:11.088215  787862 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0116 03:42:11.088223  787862 command_runner.go:130] > conmon_cgroup = "pod"
	I0116 03:42:11.088230  787862 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0116 03:42:11.088240  787862 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0116 03:42:11.088248  787862 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0116 03:42:11.088253  787862 command_runner.go:130] > # conmon_env = [
	I0116 03:42:11.088257  787862 command_runner.go:130] > # ]
	I0116 03:42:11.088266  787862 command_runner.go:130] > # Additional environment variables to set for all the
	I0116 03:42:11.088276  787862 command_runner.go:130] > # containers. These are overridden if set in the
	I0116 03:42:11.088283  787862 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0116 03:42:11.088291  787862 command_runner.go:130] > # default_env = [
	I0116 03:42:11.088296  787862 command_runner.go:130] > # ]
	I0116 03:42:11.088303  787862 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0116 03:42:11.088491  787862 command_runner.go:130] > # selinux = false
	I0116 03:42:11.088507  787862 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0116 03:42:11.088516  787862 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0116 03:42:11.088523  787862 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0116 03:42:11.088531  787862 command_runner.go:130] > # seccomp_profile = ""
	I0116 03:42:11.088539  787862 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0116 03:42:11.088551  787862 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0116 03:42:11.088559  787862 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0116 03:42:11.088568  787862 command_runner.go:130] > # which might increase security.
	I0116 03:42:11.088575  787862 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0116 03:42:11.088583  787862 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0116 03:42:11.088597  787862 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0116 03:42:11.088606  787862 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0116 03:42:11.088614  787862 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0116 03:42:11.088624  787862 command_runner.go:130] > # This option supports live configuration reload.
	I0116 03:42:11.088630  787862 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0116 03:42:11.088641  787862 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0116 03:42:11.088647  787862 command_runner.go:130] > # the cgroup blockio controller.
	I0116 03:42:11.088656  787862 command_runner.go:130] > # blockio_config_file = ""
	I0116 03:42:11.088665  787862 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0116 03:42:11.088673  787862 command_runner.go:130] > # irqbalance daemon.
	I0116 03:42:11.088680  787862 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0116 03:42:11.088687  787862 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0116 03:42:11.088694  787862 command_runner.go:130] > # This option supports live configuration reload.
	I0116 03:42:11.088702  787862 command_runner.go:130] > # rdt_config_file = ""
	I0116 03:42:11.088709  787862 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0116 03:42:11.088718  787862 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0116 03:42:11.088725  787862 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0116 03:42:11.088735  787862 command_runner.go:130] > # separate_pull_cgroup = ""
	I0116 03:42:11.088744  787862 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0116 03:42:11.088756  787862 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0116 03:42:11.088761  787862 command_runner.go:130] > # will be added.
	I0116 03:42:11.088767  787862 command_runner.go:130] > # default_capabilities = [
	I0116 03:42:11.088771  787862 command_runner.go:130] > # 	"CHOWN",
	I0116 03:42:11.088779  787862 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0116 03:42:11.088785  787862 command_runner.go:130] > # 	"FSETID",
	I0116 03:42:11.088794  787862 command_runner.go:130] > # 	"FOWNER",
	I0116 03:42:11.088798  787862 command_runner.go:130] > # 	"SETGID",
	I0116 03:42:11.088803  787862 command_runner.go:130] > # 	"SETUID",
	I0116 03:42:11.088811  787862 command_runner.go:130] > # 	"SETPCAP",
	I0116 03:42:11.088952  787862 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0116 03:42:11.088971  787862 command_runner.go:130] > # 	"KILL",
	I0116 03:42:11.088976  787862 command_runner.go:130] > # ]
	I0116 03:42:11.088986  787862 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0116 03:42:11.088994  787862 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0116 03:42:11.089004  787862 command_runner.go:130] > # add_inheritable_capabilities = true
	I0116 03:42:11.089012  787862 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0116 03:42:11.089022  787862 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0116 03:42:11.089027  787862 command_runner.go:130] > # default_sysctls = [
	I0116 03:42:11.089032  787862 command_runner.go:130] > # ]
	I0116 03:42:11.089042  787862 command_runner.go:130] > # List of devices on the host that a
	I0116 03:42:11.089050  787862 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0116 03:42:11.089061  787862 command_runner.go:130] > # allowed_devices = [
	I0116 03:42:11.089066  787862 command_runner.go:130] > # 	"/dev/fuse",
	I0116 03:42:11.089071  787862 command_runner.go:130] > # ]
	I0116 03:42:11.089076  787862 command_runner.go:130] > # List of additional devices, specified as
	I0116 03:42:11.089106  787862 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0116 03:42:11.089116  787862 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0116 03:42:11.089124  787862 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0116 03:42:11.089134  787862 command_runner.go:130] > # additional_devices = [
	I0116 03:42:11.089140  787862 command_runner.go:130] > # ]
	I0116 03:42:11.089167  787862 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0116 03:42:11.089177  787862 command_runner.go:130] > # cdi_spec_dirs = [
	I0116 03:42:11.089182  787862 command_runner.go:130] > # 	"/etc/cdi",
	I0116 03:42:11.089188  787862 command_runner.go:130] > # 	"/var/run/cdi",
	I0116 03:42:11.089195  787862 command_runner.go:130] > # ]
	I0116 03:42:11.089203  787862 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0116 03:42:11.089214  787862 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0116 03:42:11.089219  787862 command_runner.go:130] > # Defaults to false.
	I0116 03:42:11.089229  787862 command_runner.go:130] > # device_ownership_from_security_context = false
	I0116 03:42:11.089240  787862 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0116 03:42:11.089247  787862 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0116 03:42:11.089256  787862 command_runner.go:130] > # hooks_dir = [
	I0116 03:42:11.089262  787862 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0116 03:42:11.089267  787862 command_runner.go:130] > # ]
	I0116 03:42:11.089278  787862 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0116 03:42:11.089290  787862 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0116 03:42:11.089300  787862 command_runner.go:130] > # its default mounts from the following two files:
	I0116 03:42:11.089304  787862 command_runner.go:130] > #
	I0116 03:42:11.089312  787862 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0116 03:42:11.089320  787862 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0116 03:42:11.089327  787862 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0116 03:42:11.089331  787862 command_runner.go:130] > #
	I0116 03:42:11.089342  787862 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0116 03:42:11.089350  787862 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0116 03:42:11.089361  787862 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0116 03:42:11.089368  787862 command_runner.go:130] > #      only add mounts it finds in this file.
	I0116 03:42:11.089375  787862 command_runner.go:130] > #
	I0116 03:42:11.089382  787862 command_runner.go:130] > # default_mounts_file = ""
	I0116 03:42:11.089393  787862 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0116 03:42:11.089404  787862 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0116 03:42:11.089409  787862 command_runner.go:130] > # pids_limit = 0
	I0116 03:42:11.089421  787862 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0116 03:42:11.089428  787862 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0116 03:42:11.089439  787862 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0116 03:42:11.089449  787862 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0116 03:42:11.089457  787862 command_runner.go:130] > # log_size_max = -1
	I0116 03:42:11.089466  787862 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0116 03:42:11.089472  787862 command_runner.go:130] > # log_to_journald = false
	I0116 03:42:11.089483  787862 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0116 03:42:11.089490  787862 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0116 03:42:11.089496  787862 command_runner.go:130] > # Path to directory for container attach sockets.
	I0116 03:42:11.089503  787862 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0116 03:42:11.089512  787862 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0116 03:42:11.089518  787862 command_runner.go:130] > # bind_mount_prefix = ""
	I0116 03:42:11.089528  787862 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0116 03:42:11.089725  787862 command_runner.go:130] > # read_only = false
	I0116 03:42:11.089741  787862 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0116 03:42:11.089750  787862 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0116 03:42:11.089755  787862 command_runner.go:130] > # live configuration reload.
	I0116 03:42:11.089765  787862 command_runner.go:130] > # log_level = "info"
	I0116 03:42:11.089772  787862 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0116 03:42:11.089782  787862 command_runner.go:130] > # This option supports live configuration reload.
	I0116 03:42:11.089807  787862 command_runner.go:130] > # log_filter = ""
	I0116 03:42:11.089821  787862 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0116 03:42:11.089829  787862 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0116 03:42:11.089834  787862 command_runner.go:130] > # separated by comma.
	I0116 03:42:11.089842  787862 command_runner.go:130] > # uid_mappings = ""
	I0116 03:42:11.089850  787862 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0116 03:42:11.089861  787862 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0116 03:42:11.089873  787862 command_runner.go:130] > # separated by comma.
	I0116 03:42:11.089882  787862 command_runner.go:130] > # gid_mappings = ""
	I0116 03:42:11.089890  787862 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0116 03:42:11.089900  787862 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0116 03:42:11.089908  787862 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0116 03:42:11.089914  787862 command_runner.go:130] > # minimum_mappable_uid = -1
	I0116 03:42:11.089921  787862 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0116 03:42:11.089932  787862 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0116 03:42:11.089945  787862 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0116 03:42:11.089954  787862 command_runner.go:130] > # minimum_mappable_gid = -1
	I0116 03:42:11.089962  787862 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0116 03:42:11.089975  787862 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0116 03:42:11.089983  787862 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0116 03:42:11.089991  787862 command_runner.go:130] > # ctr_stop_timeout = 30
	I0116 03:42:11.089998  787862 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0116 03:42:11.090006  787862 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0116 03:42:11.090013  787862 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0116 03:42:11.090029  787862 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0116 03:42:11.090034  787862 command_runner.go:130] > # drop_infra_ctr = true
	I0116 03:42:11.090046  787862 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0116 03:42:11.090053  787862 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0116 03:42:11.090065  787862 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0116 03:42:11.090074  787862 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0116 03:42:11.090085  787862 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0116 03:42:11.090098  787862 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0116 03:42:11.090108  787862 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0116 03:42:11.090117  787862 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0116 03:42:11.090124  787862 command_runner.go:130] > # pinns_path = ""
	I0116 03:42:11.090132  787862 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0116 03:42:11.090150  787862 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0116 03:42:11.090161  787862 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0116 03:42:11.090172  787862 command_runner.go:130] > # default_runtime = "runc"
	I0116 03:42:11.090183  787862 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0116 03:42:11.090193  787862 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0116 03:42:11.090207  787862 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0116 03:42:11.090213  787862 command_runner.go:130] > # creation as a file is not desired either.
	I0116 03:42:11.090224  787862 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0116 03:42:11.090233  787862 command_runner.go:130] > # the hostname is being managed dynamically.
	I0116 03:42:11.090245  787862 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0116 03:42:11.090250  787862 command_runner.go:130] > # ]
	I0116 03:42:11.090262  787862 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0116 03:42:11.090270  787862 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0116 03:42:11.090281  787862 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0116 03:42:11.090289  787862 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0116 03:42:11.090294  787862 command_runner.go:130] > #
	I0116 03:42:11.090303  787862 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0116 03:42:11.090310  787862 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0116 03:42:11.090325  787862 command_runner.go:130] > #  runtime_type = "oci"
	I0116 03:42:11.090331  787862 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0116 03:42:11.090337  787862 command_runner.go:130] > #  privileged_without_host_devices = false
	I0116 03:42:11.090342  787862 command_runner.go:130] > #  allowed_annotations = []
	I0116 03:42:11.090347  787862 command_runner.go:130] > # Where:
	I0116 03:42:11.090357  787862 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0116 03:42:11.090367  787862 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0116 03:42:11.090374  787862 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0116 03:42:11.090382  787862 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0116 03:42:11.090387  787862 command_runner.go:130] > #   in $PATH.
	I0116 03:42:11.090403  787862 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0116 03:42:11.090409  787862 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0116 03:42:11.090417  787862 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0116 03:42:11.090421  787862 command_runner.go:130] > #   state.
	I0116 03:42:11.090429  787862 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0116 03:42:11.090436  787862 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0116 03:42:11.090447  787862 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0116 03:42:11.090454  787862 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0116 03:42:11.090471  787862 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0116 03:42:11.090484  787862 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0116 03:42:11.090490  787862 command_runner.go:130] > #   The currently recognized values are:
	I0116 03:42:11.090501  787862 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0116 03:42:11.090509  787862 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0116 03:42:11.090517  787862 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0116 03:42:11.090526  787862 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0116 03:42:11.090547  787862 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0116 03:42:11.090558  787862 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0116 03:42:11.090567  787862 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0116 03:42:11.090578  787862 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0116 03:42:11.090584  787862 command_runner.go:130] > #   should be moved to the container's cgroup
	I0116 03:42:11.090592  787862 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0116 03:42:11.090598  787862 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0116 03:42:11.090603  787862 command_runner.go:130] > runtime_type = "oci"
	I0116 03:42:11.090610  787862 command_runner.go:130] > runtime_root = "/run/runc"
	I0116 03:42:11.090622  787862 command_runner.go:130] > runtime_config_path = ""
	I0116 03:42:11.090821  787862 command_runner.go:130] > monitor_path = ""
	I0116 03:42:11.090861  787862 command_runner.go:130] > monitor_cgroup = ""
	I0116 03:42:11.090874  787862 command_runner.go:130] > monitor_exec_cgroup = ""
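
The [crio.runtime.runtimes.*] table format documented above can be extended without editing the main file, since CRI-O also merges drop-in files from /etc/crio/crio.conf.d. A hedged sketch registering crun that way (the binary path, root, and 10- prefix are assumptions for illustration):

	package main

	import "os"

	// dropIn follows the runtime-handler table format shown in the dump above.
	const dropIn = `[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	`

	func main() {
		// Drop-ins are applied over crio.conf; the numeric prefix orders them.
		if err := os.WriteFile("/etc/crio/crio.conf.d/10-crun.conf", []byte(dropIn), 0o644); err != nil {
			panic(err)
		}
	}
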
	I0116 03:42:11.090912  787862 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0116 03:42:11.090926  787862 command_runner.go:130] > # running containers
	I0116 03:42:11.090933  787862 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0116 03:42:11.090941  787862 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0116 03:42:11.090954  787862 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0116 03:42:11.090962  787862 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0116 03:42:11.090971  787862 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0116 03:42:11.090977  787862 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0116 03:42:11.090986  787862 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0116 03:42:11.090991  787862 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0116 03:42:11.091000  787862 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0116 03:42:11.091005  787862 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0116 03:42:11.091013  787862 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0116 03:42:11.091022  787862 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0116 03:42:11.091030  787862 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0116 03:42:11.091042  787862 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0116 03:42:11.091054  787862 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" (to configure the cpuset).
	I0116 03:42:11.091065  787862 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0116 03:42:11.091076  787862 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0116 03:42:11.091089  787862 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0116 03:42:11.091096  787862 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0116 03:42:11.091107  787862 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0116 03:42:11.091115  787862 command_runner.go:130] > # Example:
	I0116 03:42:11.091121  787862 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0116 03:42:11.091129  787862 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0116 03:42:11.091137  787862 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0116 03:42:11.091146  787862 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0116 03:42:11.091150  787862 command_runner.go:130] > # cpuset = "0-1"
	I0116 03:42:11.091155  787862 command_runner.go:130] > # cpushares = 0
	I0116 03:42:11.091162  787862 command_runner.go:130] > # Where:
	I0116 03:42:11.091168  787862 command_runner.go:130] > # The workload name is workload-type.
	I0116 03:42:11.091177  787862 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0116 03:42:11.091186  787862 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0116 03:42:11.091193  787862 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0116 03:42:11.091206  787862 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0116 03:42:11.091216  787862 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0116 03:42:11.091220  787862 command_runner.go:130] > # 
	I0116 03:42:11.091231  787862 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0116 03:42:11.091238  787862 command_runner.go:130] > #
	I0116 03:42:11.091247  787862 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0116 03:42:11.091255  787862 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0116 03:42:11.091262  787862 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0116 03:42:11.091274  787862 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0116 03:42:11.091281  787862 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0116 03:42:11.091289  787862 command_runner.go:130] > [crio.image]
	I0116 03:42:11.091296  787862 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0116 03:42:11.091305  787862 command_runner.go:130] > # default_transport = "docker://"
	I0116 03:42:11.091312  787862 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0116 03:42:11.091325  787862 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0116 03:42:11.091331  787862 command_runner.go:130] > # global_auth_file = ""
	I0116 03:42:11.091340  787862 command_runner.go:130] > # The image used to instantiate infra containers.
	I0116 03:42:11.091348  787862 command_runner.go:130] > # This option supports live configuration reload.
	I0116 03:42:11.091356  787862 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0116 03:42:11.091367  787862 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0116 03:42:11.091374  787862 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0116 03:42:11.091384  787862 command_runner.go:130] > # This option supports live configuration reload.
	I0116 03:42:11.091389  787862 command_runner.go:130] > # pause_image_auth_file = ""
	I0116 03:42:11.091398  787862 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0116 03:42:11.091406  787862 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0116 03:42:11.091413  787862 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0116 03:42:11.091421  787862 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0116 03:42:11.091429  787862 command_runner.go:130] > # pause_command = "/pause"
	I0116 03:42:11.091436  787862 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0116 03:42:11.091447  787862 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0116 03:42:11.091455  787862 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0116 03:42:11.091465  787862 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0116 03:42:11.091472  787862 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0116 03:42:11.091480  787862 command_runner.go:130] > # signature_policy = ""
	I0116 03:42:11.091487  787862 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0116 03:42:11.091495  787862 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0116 03:42:11.091504  787862 command_runner.go:130] > # changing them here.
	I0116 03:42:11.091512  787862 command_runner.go:130] > # insecure_registries = [
	I0116 03:42:11.091516  787862 command_runner.go:130] > # ]
	I0116 03:42:11.091524  787862 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0116 03:42:11.091533  787862 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0116 03:42:11.091541  787862 command_runner.go:130] > # image_volumes = "mkdir"
	I0116 03:42:11.091550  787862 command_runner.go:130] > # Temporary directory to use for storing big files
	I0116 03:42:11.091556  787862 command_runner.go:130] > # big_files_temporary_dir = ""
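
pause_image is the one value minikube sets explicitly in this section (registry.k8s.io/pause:3.9, matching the preloaded image list earlier). A quick way to confirm the runtime can resolve it, sketched with crictl (illustrative, not part of the test):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// "crictl inspecti" exits non-zero if the image is not in the store.
		out, err := exec.Command("sudo", "crictl", "inspecti", "registry.k8s.io/pause:3.9").CombinedOutput()
		if err != nil {
			fmt.Println("pause image missing:", err)
			return
		}
		fmt.Printf("pause image present (%d bytes of metadata)\n", len(out))
	}
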
	I0116 03:42:11.091566  787862 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0116 03:42:11.091570  787862 command_runner.go:130] > # CNI plugins.
	I0116 03:42:11.091575  787862 command_runner.go:130] > [crio.network]
	I0116 03:42:11.091582  787862 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0116 03:42:11.091591  787862 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0116 03:42:11.091596  787862 command_runner.go:130] > # cni_default_network = ""
	I0116 03:42:11.091605  787862 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0116 03:42:11.091613  787862 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0116 03:42:11.091619  787862 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0116 03:42:11.091626  787862 command_runner.go:130] > # plugin_dirs = [
	I0116 03:42:11.091633  787862 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0116 03:42:11.091640  787862 command_runner.go:130] > # ]
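
Since cni_default_network is unset here, CRI-O takes the first configuration it finds in network_dir, which in practice means the file that sorts first by name. Listing that directory shows which network wins; a small sketch using the default path from the dump:

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		entries, err := os.ReadDir("/etc/cni/net.d/")
		if err != nil {
			panic(err)
		}
		// os.ReadDir returns entries sorted by filename, i.e. the pick order.
		for _, e := range entries {
			fmt.Println(e.Name())
		}
	}
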
	I0116 03:42:11.091647  787862 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0116 03:42:11.091652  787862 command_runner.go:130] > [crio.metrics]
	I0116 03:42:11.091658  787862 command_runner.go:130] > # Globally enable or disable metrics support.
	I0116 03:42:11.091665  787862 command_runner.go:130] > # enable_metrics = false
	I0116 03:42:11.091671  787862 command_runner.go:130] > # Specify enabled metrics collectors.
	I0116 03:42:11.091679  787862 command_runner.go:130] > # By default, all metrics are enabled.
	I0116 03:42:11.091687  787862 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0116 03:42:11.091697  787862 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0116 03:42:11.091704  787862 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0116 03:42:11.091712  787862 command_runner.go:130] > # metrics_collectors = [
	I0116 03:42:11.091871  787862 command_runner.go:130] > # 	"operations",
	I0116 03:42:11.091888  787862 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0116 03:42:11.091895  787862 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0116 03:42:11.091900  787862 command_runner.go:130] > # 	"operations_errors",
	I0116 03:42:11.091908  787862 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0116 03:42:11.091913  787862 command_runner.go:130] > # 	"image_pulls_by_name",
	I0116 03:42:11.091921  787862 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0116 03:42:11.091926  787862 command_runner.go:130] > # 	"image_pulls_failures",
	I0116 03:42:11.091937  787862 command_runner.go:130] > # 	"image_pulls_successes",
	I0116 03:42:11.091944  787862 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0116 03:42:11.091951  787862 command_runner.go:130] > # 	"image_layer_reuse",
	I0116 03:42:11.091957  787862 command_runner.go:130] > # 	"containers_oom_total",
	I0116 03:42:11.091964  787862 command_runner.go:130] > # 	"containers_oom",
	I0116 03:42:11.091969  787862 command_runner.go:130] > # 	"processes_defunct",
	I0116 03:42:11.091974  787862 command_runner.go:130] > # 	"operations_total",
	I0116 03:42:11.091982  787862 command_runner.go:130] > # 	"operations_latency_seconds",
	I0116 03:42:11.091988  787862 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0116 03:42:11.091995  787862 command_runner.go:130] > # 	"operations_errors_total",
	I0116 03:42:11.092002  787862 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0116 03:42:11.092011  787862 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0116 03:42:11.092018  787862 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0116 03:42:11.092027  787862 command_runner.go:130] > # 	"image_pulls_success_total",
	I0116 03:42:11.092032  787862 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0116 03:42:11.092040  787862 command_runner.go:130] > # 	"containers_oom_count_total",
	I0116 03:42:11.092044  787862 command_runner.go:130] > # ]
	I0116 03:42:11.092050  787862 command_runner.go:130] > # The port on which the metrics server will listen.
	I0116 03:42:11.092058  787862 command_runner.go:130] > # metrics_port = 9090
	I0116 03:42:11.092083  787862 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0116 03:42:11.092093  787862 command_runner.go:130] > # metrics_socket = ""
	I0116 03:42:11.092099  787862 command_runner.go:130] > # The certificate for the secure metrics server.
	I0116 03:42:11.092109  787862 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0116 03:42:11.092117  787862 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0116 03:42:11.092125  787862 command_runner.go:130] > # certificate on any modification event.
	I0116 03:42:11.092130  787862 command_runner.go:130] > # metrics_cert = ""
	I0116 03:42:11.092137  787862 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0116 03:42:11.092146  787862 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0116 03:42:11.092159  787862 command_runner.go:130] > # metrics_key = ""
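Side note on the [crio.metrics] block above: CRI-O also merges drop-in files from /etc/crio/crio.conf.d/, so a minimal sketch for actually enabling the endpoint (illustrative values, not what this run uses) would be:

	sudo tee /etc/crio/crio.conf.d/10-metrics.conf <<'EOF'
	[crio.metrics]
	enable_metrics = true
	metrics_port = 9090
	EOF
	sudo systemctl restart crio
	curl -s http://127.0.0.1:9090/metrics | head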
	I0116 03:42:11.092169  787862 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0116 03:42:11.092175  787862 command_runner.go:130] > [crio.tracing]
	I0116 03:42:11.092184  787862 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0116 03:42:11.092189  787862 command_runner.go:130] > # enable_tracing = false
	I0116 03:42:11.092198  787862 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0116 03:42:11.092203  787862 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0116 03:42:11.092209  787862 command_runner.go:130] > # Number of samples to collect per million spans.
	I0116 03:42:11.092215  787862 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0116 03:42:11.092225  787862 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0116 03:42:11.092230  787862 command_runner.go:130] > [crio.stats]
	I0116 03:42:11.092238  787862 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0116 03:42:11.092248  787862 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0116 03:42:11.092253  787862 command_runner.go:130] > # stats_collection_period = 0
	I0116 03:42:11.094096  787862 command_runner.go:130] ! time="2024-01-16 03:42:11.081598878Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0116 03:42:11.094124  787862 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0116 03:42:11.094548  787862 cni.go:84] Creating CNI manager for ""
	I0116 03:42:11.094567  787862 cni.go:136] 1 nodes found, recommending kindnet
	I0116 03:42:11.094597  787862 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 03:42:11.094627  787862 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-741097 NodeName:multinode-741097 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 03:42:11.094788  787862 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-741097"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
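A rendered config like the one above can be sanity-checked before kubeadm is pointed at it; a minimal sketch, assuming kubeadm is on PATH (this run invokes it from /var/lib/minikube/binaries/v1.28.4):

	kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
	# 'kubeadm config validate' (kubeadm v1.26+) checks the file without running any phases
	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml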
	I0116 03:42:11.094862  787862 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-741097 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-741097 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
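In the kubelet drop-in above, the empty ExecStart= line is standard systemd behavior: an empty assignment resets the ExecStart inherited from /lib/systemd/system/kubelet.service, so the override's command replaces the original instead of accumulating. To inspect the merged unit on a node (illustrative, not part of this run):

	systemctl cat kubelet            # base unit plus the 10-kubeadm.conf drop-in
	systemd-delta --type=extended    # list units that are extended by drop-ins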
	I0116 03:42:11.094943  787862 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 03:42:11.104184  787862 command_runner.go:130] > kubeadm
	I0116 03:42:11.104200  787862 command_runner.go:130] > kubectl
	I0116 03:42:11.104206  787862 command_runner.go:130] > kubelet
	I0116 03:42:11.105333  787862 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 03:42:11.105404  787862 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 03:42:11.115643  787862 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I0116 03:42:11.136601  787862 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 03:42:11.158553  787862 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I0116 03:42:11.178459  787862 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0116 03:42:11.182750  787862 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
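The one-liner above is the usual idempotent /etc/hosts update: filter out any stale control-plane.minikube.internal entry, append the fresh mapping, and sudo-copy the temp file back in one step. A quick check afterwards (illustrative):

	grep -n 'control-plane.minikube.internal' /etc/hosts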
	I0116 03:42:11.195330  787862 certs.go:56] Setting up /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/multinode-741097 for IP: 192.168.58.2
	I0116 03:42:11.195361  787862 certs.go:190] acquiring lock for shared ca certs: {Name:mkc1cd6c1048e37282c341d17731487c267a60dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:42:11.195487  787862 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17967-719286/.minikube/ca.key
	I0116 03:42:11.195523  787862 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17967-719286/.minikube/proxy-client-ca.key
	I0116 03:42:11.195567  787862 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/multinode-741097/client.key
	I0116 03:42:11.195576  787862 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/multinode-741097/client.crt with IP's: []
	I0116 03:42:11.506671  787862 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/multinode-741097/client.crt ...
	I0116 03:42:11.506701  787862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/multinode-741097/client.crt: {Name:mk2c7bf8b3efc7e1681acfc8bdb1904c1e561833 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:42:11.506887  787862 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/multinode-741097/client.key ...
	I0116 03:42:11.506898  787862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/multinode-741097/client.key: {Name:mkc312bb49d25dbd579e7a4d341baaa317028a81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:42:11.506987  787862 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/multinode-741097/apiserver.key.cee25041
	I0116 03:42:11.507003  787862 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/multinode-741097/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0116 03:42:12.091740  787862 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/multinode-741097/apiserver.crt.cee25041 ...
	I0116 03:42:12.091778  787862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/multinode-741097/apiserver.crt.cee25041: {Name:mk84dec4797245dbad7cd54e6ab136848a35cf64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:42:12.091958  787862 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/multinode-741097/apiserver.key.cee25041 ...
	I0116 03:42:12.091972  787862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/multinode-741097/apiserver.key.cee25041: {Name:mk454b6a90130f7d816d9784fd8860b4152ce95a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:42:12.092059  787862 certs.go:337] copying /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/multinode-741097/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/multinode-741097/apiserver.crt
	I0116 03:42:12.092157  787862 certs.go:341] copying /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/multinode-741097/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/multinode-741097/apiserver.key
	I0116 03:42:12.092225  787862 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/multinode-741097/proxy-client.key
	I0116 03:42:12.092241  787862 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/multinode-741097/proxy-client.crt with IP's: []
	I0116 03:42:12.490645  787862 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/multinode-741097/proxy-client.crt ...
	I0116 03:42:12.490675  787862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/multinode-741097/proxy-client.crt: {Name:mkebb82f338a4861240390cee1b70d8b21b5be53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:42:12.490860  787862 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/multinode-741097/proxy-client.key ...
	I0116 03:42:12.490874  787862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/multinode-741097/proxy-client.key: {Name:mk0b4ae03718d80a5c7b35faf6215bcb85135030 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:42:12.490962  787862 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/multinode-741097/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0116 03:42:12.490981  787862 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/multinode-741097/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0116 03:42:12.490994  787862 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/multinode-741097/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0116 03:42:12.491010  787862 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/multinode-741097/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0116 03:42:12.491021  787862 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-719286/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0116 03:42:12.491038  787862 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-719286/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0116 03:42:12.491054  787862 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-719286/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0116 03:42:12.491071  787862 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-719286/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0116 03:42:12.491120  787862 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-719286/.minikube/certs/home/jenkins/minikube-integration/17967-719286/.minikube/certs/724621.pem (1338 bytes)
	W0116 03:42:12.491162  787862 certs.go:433] ignoring /home/jenkins/minikube-integration/17967-719286/.minikube/certs/home/jenkins/minikube-integration/17967-719286/.minikube/certs/724621_empty.pem, impossibly tiny 0 bytes
	I0116 03:42:12.491177  787862 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-719286/.minikube/certs/home/jenkins/minikube-integration/17967-719286/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 03:42:12.491202  787862 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-719286/.minikube/certs/home/jenkins/minikube-integration/17967-719286/.minikube/certs/ca.pem (1082 bytes)
	I0116 03:42:12.491230  787862 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-719286/.minikube/certs/home/jenkins/minikube-integration/17967-719286/.minikube/certs/cert.pem (1123 bytes)
	I0116 03:42:12.491266  787862 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-719286/.minikube/certs/home/jenkins/minikube-integration/17967-719286/.minikube/certs/key.pem (1675 bytes)
	I0116 03:42:12.491316  787862 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-719286/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17967-719286/.minikube/files/etc/ssl/certs/7246212.pem (1708 bytes)
	I0116 03:42:12.491347  787862 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-719286/.minikube/certs/724621.pem -> /usr/share/ca-certificates/724621.pem
	I0116 03:42:12.491366  787862 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-719286/.minikube/files/etc/ssl/certs/7246212.pem -> /usr/share/ca-certificates/7246212.pem
	I0116 03:42:12.491377  787862 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-719286/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:42:12.491975  787862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/multinode-741097/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 03:42:12.518955  787862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/multinode-741097/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0116 03:42:12.545241  787862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/multinode-741097/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 03:42:12.571590  787862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/multinode-741097/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0116 03:42:12.597369  787862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-719286/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 03:42:12.623099  787862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-719286/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0116 03:42:12.649217  787862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-719286/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 03:42:12.675157  787862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-719286/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0116 03:42:12.701709  787862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-719286/.minikube/certs/724621.pem --> /usr/share/ca-certificates/724621.pem (1338 bytes)
	I0116 03:42:12.728080  787862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-719286/.minikube/files/etc/ssl/certs/7246212.pem --> /usr/share/ca-certificates/7246212.pem (1708 bytes)
	I0116 03:42:12.754187  787862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-719286/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 03:42:12.780198  787862 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 03:42:12.800002  787862 ssh_runner.go:195] Run: openssl version
	I0116 03:42:12.806268  787862 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0116 03:42:12.806617  787862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 03:42:12.820970  787862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:42:12.825111  787862 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 16 03:21 /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:42:12.825131  787862 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 03:21 /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:42:12.825178  787862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:42:12.832970  787862 command_runner.go:130] > b5213941
	I0116 03:42:12.833338  787862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 03:42:12.844328  787862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/724621.pem && ln -fs /usr/share/ca-certificates/724621.pem /etc/ssl/certs/724621.pem"
	I0116 03:42:12.855626  787862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/724621.pem
	I0116 03:42:12.859912  787862 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 16 03:27 /usr/share/ca-certificates/724621.pem
	I0116 03:42:12.860330  787862 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 03:27 /usr/share/ca-certificates/724621.pem
	I0116 03:42:12.860414  787862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/724621.pem
	I0116 03:42:12.868551  787862 command_runner.go:130] > 51391683
	I0116 03:42:12.868946  787862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/724621.pem /etc/ssl/certs/51391683.0"
	I0116 03:42:12.879937  787862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7246212.pem && ln -fs /usr/share/ca-certificates/7246212.pem /etc/ssl/certs/7246212.pem"
	I0116 03:42:12.891014  787862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7246212.pem
	I0116 03:42:12.895309  787862 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 16 03:27 /usr/share/ca-certificates/7246212.pem
	I0116 03:42:12.895576  787862 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 03:27 /usr/share/ca-certificates/7246212.pem
	I0116 03:42:12.895629  787862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7246212.pem
	I0116 03:42:12.903205  787862 command_runner.go:130] > 3ec20f2e
	I0116 03:42:12.903599  787862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7246212.pem /etc/ssl/certs/3ec20f2e.0"
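The hash-and-symlink steps above implement OpenSSL's CApath lookup scheme: each trusted certificate must be reachable as <subject-hash>.0 under /etc/ssl/certs, and 'openssl x509 -hash' computes that hash. A minimal sketch of verifying one of the installed CAs against the directory (illustrative):

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	ls -l "/etc/ssl/certs/${h}.0"
	openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem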
	I0116 03:42:12.914767  787862 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 03:42:12.918833  787862 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 03:42:12.918876  787862 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 03:42:12.918914  787862 kubeadm.go:404] StartCluster: {Name:multinode-741097 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-741097 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:42:12.918988  787862 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 03:42:12.919046  787862 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 03:42:12.957290  787862 cri.go:89] found id: ""
	I0116 03:42:12.957356  787862 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 03:42:12.966091  787862 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0116 03:42:12.966152  787862 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0116 03:42:12.967247  787862 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0116 03:42:12.967337  787862 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:42:12.977236  787862 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0116 03:42:12.977296  787862 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:42:12.987498  787862 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0116 03:42:12.987564  787862 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0116 03:42:12.987579  787862 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0116 03:42:12.987589  787862 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:42:12.987618  787862 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:42:12.987649  787862 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0116 03:42:13.087815  787862 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I0116 03:42:13.087822  787862 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I0116 03:42:13.166983  787862 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 03:42:13.167016  787862 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 03:42:28.228277  787862 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0116 03:42:28.228323  787862 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I0116 03:42:28.228362  787862 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 03:42:28.228372  787862 command_runner.go:130] > [preflight] Running pre-flight checks
	I0116 03:42:28.228461  787862 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0116 03:42:28.228470  787862 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0116 03:42:28.228522  787862 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-aws
	I0116 03:42:28.228530  787862 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1051-aws
	I0116 03:42:28.228562  787862 kubeadm.go:322] OS: Linux
	I0116 03:42:28.228571  787862 command_runner.go:130] > OS: Linux
	I0116 03:42:28.228617  787862 kubeadm.go:322] CGROUPS_CPU: enabled
	I0116 03:42:28.228626  787862 command_runner.go:130] > CGROUPS_CPU: enabled
	I0116 03:42:28.228671  787862 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0116 03:42:28.228680  787862 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0116 03:42:28.228723  787862 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0116 03:42:28.228733  787862 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0116 03:42:28.228777  787862 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0116 03:42:28.228786  787862 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0116 03:42:28.228830  787862 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0116 03:42:28.228838  787862 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0116 03:42:28.228883  787862 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0116 03:42:28.228892  787862 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0116 03:42:28.228934  787862 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0116 03:42:28.228943  787862 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0116 03:42:28.228987  787862 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0116 03:42:28.228996  787862 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0116 03:42:28.229038  787862 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0116 03:42:28.229047  787862 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0116 03:42:28.229113  787862 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 03:42:28.229128  787862 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 03:42:28.229216  787862 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 03:42:28.229227  787862 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 03:42:28.229313  787862 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0116 03:42:28.229321  787862 command_runner.go:130] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0116 03:42:28.229379  787862 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 03:42:28.231347  787862 out.go:204]   - Generating certificates and keys ...
	I0116 03:42:28.229464  787862 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 03:42:28.231431  787862 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 03:42:28.231448  787862 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0116 03:42:28.231506  787862 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 03:42:28.231515  787862 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0116 03:42:28.231577  787862 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0116 03:42:28.231585  787862 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0116 03:42:28.231638  787862 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0116 03:42:28.231647  787862 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0116 03:42:28.231703  787862 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0116 03:42:28.231714  787862 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0116 03:42:28.231760  787862 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0116 03:42:28.231769  787862 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0116 03:42:28.231818  787862 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0116 03:42:28.231827  787862 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0116 03:42:28.231937  787862 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-741097] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0116 03:42:28.231946  787862 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-741097] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0116 03:42:28.231994  787862 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0116 03:42:28.232002  787862 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0116 03:42:28.232128  787862 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-741097] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0116 03:42:28.232140  787862 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-741097] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0116 03:42:28.232200  787862 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0116 03:42:28.232208  787862 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0116 03:42:28.232267  787862 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0116 03:42:28.232276  787862 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0116 03:42:28.232316  787862 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0116 03:42:28.232325  787862 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0116 03:42:28.232376  787862 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 03:42:28.232385  787862 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 03:42:28.232437  787862 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 03:42:28.232446  787862 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 03:42:28.232503  787862 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 03:42:28.232513  787862 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 03:42:28.232572  787862 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 03:42:28.232581  787862 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 03:42:28.232632  787862 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 03:42:28.232641  787862 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 03:42:28.232716  787862 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 03:42:28.232724  787862 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 03:42:28.232785  787862 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 03:42:28.235313  787862 out.go:204]   - Booting up control plane ...
	I0116 03:42:28.232890  787862 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 03:42:28.235434  787862 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 03:42:28.235422  787862 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 03:42:28.235536  787862 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 03:42:28.235545  787862 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 03:42:28.235619  787862 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 03:42:28.235628  787862 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 03:42:28.235743  787862 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 03:42:28.235751  787862 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 03:42:28.235852  787862 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 03:42:28.235866  787862 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 03:42:28.235912  787862 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0116 03:42:28.235920  787862 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0116 03:42:28.236112  787862 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 03:42:28.236121  787862 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 03:42:28.236210  787862 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.002567 seconds
	I0116 03:42:28.236220  787862 command_runner.go:130] > [apiclient] All control plane components are healthy after 7.002567 seconds
	I0116 03:42:28.236345  787862 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 03:42:28.236354  787862 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 03:42:28.236491  787862 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 03:42:28.236501  787862 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 03:42:28.236559  787862 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0116 03:42:28.236568  787862 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0116 03:42:28.236751  787862 kubeadm.go:322] [mark-control-plane] Marking the node multinode-741097 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0116 03:42:28.236761  787862 command_runner.go:130] > [mark-control-plane] Marking the node multinode-741097 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0116 03:42:28.236822  787862 kubeadm.go:322] [bootstrap-token] Using token: d2jjxp.uke76iznjwba4gl2
	I0116 03:42:28.240254  787862 out.go:204]   - Configuring RBAC rules ...
	I0116 03:42:28.236908  787862 command_runner.go:130] > [bootstrap-token] Using token: d2jjxp.uke76iznjwba4gl2
	I0116 03:42:28.240360  787862 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 03:42:28.240375  787862 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 03:42:28.240454  787862 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0116 03:42:28.240470  787862 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0116 03:42:28.240600  787862 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 03:42:28.240609  787862 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 03:42:28.240727  787862 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 03:42:28.240750  787862 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 03:42:28.240862  787862 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 03:42:28.240870  787862 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 03:42:28.240959  787862 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 03:42:28.240969  787862 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 03:42:28.241076  787862 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0116 03:42:28.241084  787862 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0116 03:42:28.241125  787862 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0116 03:42:28.241133  787862 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0116 03:42:28.241175  787862 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0116 03:42:28.241183  787862 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0116 03:42:28.241188  787862 kubeadm.go:322] 
	I0116 03:42:28.241244  787862 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0116 03:42:28.241252  787862 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0116 03:42:28.241256  787862 kubeadm.go:322] 
	I0116 03:42:28.241328  787862 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0116 03:42:28.241336  787862 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0116 03:42:28.241341  787862 kubeadm.go:322] 
	I0116 03:42:28.241367  787862 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0116 03:42:28.241375  787862 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0116 03:42:28.241430  787862 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 03:42:28.241440  787862 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 03:42:28.241488  787862 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 03:42:28.241496  787862 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 03:42:28.241500  787862 kubeadm.go:322] 
	I0116 03:42:28.241551  787862 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0116 03:42:28.241559  787862 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0116 03:42:28.241564  787862 kubeadm.go:322] 
	I0116 03:42:28.241608  787862 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0116 03:42:28.241616  787862 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0116 03:42:28.241621  787862 kubeadm.go:322] 
	I0116 03:42:28.241670  787862 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0116 03:42:28.241678  787862 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0116 03:42:28.241748  787862 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 03:42:28.241755  787862 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 03:42:28.241819  787862 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 03:42:28.241831  787862 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 03:42:28.241835  787862 kubeadm.go:322] 
	I0116 03:42:28.241922  787862 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0116 03:42:28.241931  787862 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0116 03:42:28.242003  787862 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0116 03:42:28.242012  787862 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0116 03:42:28.242016  787862 kubeadm.go:322] 
	I0116 03:42:28.242095  787862 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token d2jjxp.uke76iznjwba4gl2 \
	I0116 03:42:28.242103  787862 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token d2jjxp.uke76iznjwba4gl2 \
	I0116 03:42:28.242199  787862 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:78b446be54113cf43e3853835de42782a6b98d45d441359ad299b10cb7c55484 \
	I0116 03:42:28.242207  787862 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:78b446be54113cf43e3853835de42782a6b98d45d441359ad299b10cb7c55484 \
	I0116 03:42:28.242226  787862 kubeadm.go:322] 	--control-plane 
	I0116 03:42:28.242234  787862 command_runner.go:130] > 	--control-plane 
	I0116 03:42:28.242239  787862 kubeadm.go:322] 
	I0116 03:42:28.242318  787862 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0116 03:42:28.242327  787862 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0116 03:42:28.242332  787862 kubeadm.go:322] 
	I0116 03:42:28.242409  787862 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token d2jjxp.uke76iznjwba4gl2 \
	I0116 03:42:28.242420  787862 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token d2jjxp.uke76iznjwba4gl2 \
	I0116 03:42:28.242516  787862 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:78b446be54113cf43e3853835de42782a6b98d45d441359ad299b10cb7c55484 
	I0116 03:42:28.242528  787862 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:78b446be54113cf43e3853835de42782a6b98d45d441359ad299b10cb7c55484 
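The --discovery-token-ca-cert-hash printed above is a SHA-256 of the cluster CA's public key. The kubeadm-documented way to recompute it from the CA certificate (assuming an RSA CA key; this cluster's CA was copied to /var/lib/minikube/certs/ca.crt earlier in the log):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'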
	I0116 03:42:28.242545  787862 cni.go:84] Creating CNI manager for ""
	I0116 03:42:28.242555  787862 cni.go:136] 1 nodes found, recommending kindnet
	I0116 03:42:28.246131  787862 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0116 03:42:28.248092  787862 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0116 03:42:28.253184  787862 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0116 03:42:28.253202  787862 command_runner.go:130] >   Size: 4030506   	Blocks: 7880       IO Block: 4096   regular file
	I0116 03:42:28.253209  787862 command_runner.go:130] > Device: 3ah/58d	Inode: 1308531     Links: 1
	I0116 03:42:28.253217  787862 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0116 03:42:28.253224  787862 command_runner.go:130] > Access: 2023-12-04 16:39:54.000000000 +0000
	I0116 03:42:28.253230  787862 command_runner.go:130] > Modify: 2023-12-04 16:39:54.000000000 +0000
	I0116 03:42:28.253236  787862 command_runner.go:130] > Change: 2024-01-16 03:20:36.520574610 +0000
	I0116 03:42:28.253242  787862 command_runner.go:130] >  Birth: 2024-01-16 03:20:36.476574257 +0000
	I0116 03:42:28.253579  787862 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0116 03:42:28.253591  787862 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0116 03:42:28.275413  787862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0116 03:42:29.122535  787862 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0116 03:42:29.128490  787862 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0116 03:42:29.136093  787862 command_runner.go:130] > serviceaccount/kindnet created
	I0116 03:42:29.147503  787862 command_runner.go:130] > daemonset.apps/kindnet created
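With the kindnet manifest applied, a quick way to confirm the CNI daemonset rolled out (illustrative; the label selector may differ between kindnet builds):

	kubectl -n kube-system get daemonset kindnet
	kubectl -n kube-system get pods -l app=kindnet -o wide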
	I0116 03:42:29.152943  787862 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 03:42:29.153073  787862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:42:29.153162  787862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd minikube.k8s.io/name=multinode-741097 minikube.k8s.io/updated_at=2024_01_16T03_42_29_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:42:29.313904  787862 command_runner.go:130] > node/multinode-741097 labeled
	I0116 03:42:29.315143  787862 command_runner.go:130] > -16
	I0116 03:42:29.315167  787862 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0116 03:42:29.315189  787862 ops.go:34] apiserver oom_adj: -16
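The -16 read back above is the kube-apiserver's legacy oom_adj value: kubelet starts control-plane containers with a strongly negative OOM adjustment so the kernel's OOM killer prefers other processes. The modern equivalent field can be read the same way (illustrative):

	cat /proc/$(pgrep -f kube-apiserver | head -1)/oom_score_adj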
	I0116 03:42:29.315270  787862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:42:29.439749  787862 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 03:42:29.815356  787862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:42:29.906160  787862 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 03:42:30.316094  787862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:42:30.405237  787862 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 03:42:30.815882  787862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:42:30.902905  787862 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 03:42:31.315396  787862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:42:31.399131  787862 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 03:42:31.815398  787862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:42:31.905185  787862 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 03:42:32.315784  787862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:42:32.405165  787862 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 03:42:32.815424  787862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:42:32.898749  787862 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 03:42:33.315402  787862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:42:33.404769  787862 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 03:42:33.815348  787862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:42:33.907903  787862 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 03:42:34.315491  787862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:42:34.415467  787862 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 03:42:34.816134  787862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:42:34.907981  787862 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 03:42:35.315407  787862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:42:35.407090  787862 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 03:42:35.815572  787862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:42:35.904129  787862 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 03:42:36.315735  787862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:42:36.412506  787862 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 03:42:36.816177  787862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:42:36.901985  787862 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 03:42:37.315415  787862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:42:37.398410  787862 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 03:42:37.816217  787862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:42:37.898063  787862 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 03:42:38.315367  787862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:42:38.405427  787862 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 03:42:38.815788  787862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:42:38.903777  787862 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 03:42:39.315385  787862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:42:39.410455  787862 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 03:42:39.815616  787862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:42:39.902763  787862 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 03:42:40.315393  787862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:42:40.404374  787862 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 03:42:40.816158  787862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:42:40.907297  787862 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 03:42:41.316024  787862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:42:41.475603  787862 command_runner.go:130] > NAME      SECRETS   AGE
	I0116 03:42:41.475622  787862 command_runner.go:130] > default   0         0s
	I0116 03:42:41.478215  787862 kubeadm.go:1088] duration metric: took 12.325185893s to wait for elevateKubeSystemPrivileges.
	I0116 03:42:41.478246  787862 kubeadm.go:406] StartCluster complete in 28.559335784s
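
The burst of identical "kubectl get sa default" runs above is minikube polling roughly every 500ms until kubeadm has created the default service account; that is the elevateKubeSystemPrivileges wait the duration metric refers to. As an illustration only, a sketch of the same retry pattern in Go (not minikube's actual implementation), assuming kubectl is on PATH and using the kubeconfig path from the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Bound the wait; the log above shows the real wait resolving in ~12s.
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// The same probe the log repeats, rerun every 500ms.
		cmd := exec.Command("kubectl", "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if cmd.Run() == nil {
			fmt.Println("default service account exists")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default service account")
}
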
	I0116 03:42:41.478264  787862 settings.go:142] acquiring lock: {Name:mk09c1af0296e0da2e97c553b187ecf4aec5fda4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:42:41.478332  787862 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17967-719286/kubeconfig
	I0116 03:42:41.479042  787862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-719286/kubeconfig: {Name:mk79a070d6b32850c1522eb5f09a1fb050b71442 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:42:41.479534  787862 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17967-719286/kubeconfig
	I0116 03:42:41.479789  787862 kapi.go:59] client config for multinode-741097: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17967-719286/.minikube/profiles/multinode-741097/client.crt", KeyFile:"/home/jenkins/minikube-integration/17967-719286/.minikube/profiles/multinode-741097/client.key", CAFile:"/home/jenkins/minikube-integration/17967-719286/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9c50), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 03:42:41.480241  787862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 03:42:41.480575  787862 config.go:182] Loaded profile config "multinode-741097": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:42:41.480684  787862 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 03:42:41.480759  787862 addons.go:69] Setting storage-provisioner=true in profile "multinode-741097"
	I0116 03:42:41.480780  787862 addons.go:234] Setting addon storage-provisioner=true in "multinode-741097"
	I0116 03:42:41.480844  787862 host.go:66] Checking if "multinode-741097" exists ...
	I0116 03:42:41.481290  787862 cli_runner.go:164] Run: docker container inspect multinode-741097 --format={{.State.Status}}
	I0116 03:42:41.482118  787862 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0116 03:42:41.482135  787862 round_trippers.go:469] Request Headers:
	I0116 03:42:41.482145  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:42:41.482152  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:42:41.482359  787862 cert_rotation.go:137] Starting client certificate rotation controller
	I0116 03:42:41.482761  787862 addons.go:69] Setting default-storageclass=true in profile "multinode-741097"
	I0116 03:42:41.482787  787862 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-741097"
	I0116 03:42:41.483088  787862 cli_runner.go:164] Run: docker container inspect multinode-741097 --format={{.State.Status}}
	I0116 03:42:41.521496  787862 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17967-719286/kubeconfig
	I0116 03:42:41.521775  787862 kapi.go:59] client config for multinode-741097: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17967-719286/.minikube/profiles/multinode-741097/client.crt", KeyFile:"/home/jenkins/minikube-integration/17967-719286/.minikube/profiles/multinode-741097/client.key", CAFile:"/home/jenkins/minikube-integration/17967-719286/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9c50), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 03:42:41.522055  787862 addons.go:234] Setting addon default-storageclass=true in "multinode-741097"
	I0116 03:42:41.522082  787862 host.go:66] Checking if "multinode-741097" exists ...
	I0116 03:42:41.522520  787862 cli_runner.go:164] Run: docker container inspect multinode-741097 --format={{.State.Status}}
	I0116 03:42:41.542780  787862 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:42:41.544775  787862 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:42:41.544798  787862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 03:42:41.544866  787862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-741097
	I0116 03:42:41.565246  787862 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 03:42:41.565268  787862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 03:42:41.565330  787862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-741097
	I0116 03:42:41.591497  787862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33557 SSHKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/machines/multinode-741097/id_rsa Username:docker}
	I0116 03:42:41.591526  787862 round_trippers.go:574] Response Status: 200 OK in 109 milliseconds
	I0116 03:42:41.591540  787862 round_trippers.go:577] Response Headers:
	I0116 03:42:41.591550  787862 round_trippers.go:580]     Content-Length: 291
	I0116 03:42:41.591557  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:42:41 GMT
	I0116 03:42:41.591563  787862 round_trippers.go:580]     Audit-Id: a01369ea-765c-46f0-8d6a-ea315d6b1b63
	I0116 03:42:41.591569  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:42:41.591575  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:42:41.591581  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:42:41.591587  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:42:41.591615  787862 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"f3c5529d-e02d-46a5-965b-a2d49fe27004","resourceVersion":"369","creationTimestamp":"2024-01-16T03:42:28Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0116 03:42:41.591994  787862 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"f3c5529d-e02d-46a5-965b-a2d49fe27004","resourceVersion":"369","creationTimestamp":"2024-01-16T03:42:28Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0116 03:42:41.596222  787862 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0116 03:42:41.596229  787862 round_trippers.go:469] Request Headers:
	I0116 03:42:41.596238  787862 round_trippers.go:473]     Content-Type: application/json
	I0116 03:42:41.596244  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:42:41.596251  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:42:41.605647  787862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33557 SSHKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/machines/multinode-741097/id_rsa Username:docker}
	I0116 03:42:41.634711  787862 round_trippers.go:574] Response Status: 200 OK in 38 milliseconds
	I0116 03:42:41.634735  787862 round_trippers.go:577] Response Headers:
	I0116 03:42:41.634753  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:42:41.634760  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:42:41.634767  787862 round_trippers.go:580]     Content-Length: 291
	I0116 03:42:41.634773  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:42:41 GMT
	I0116 03:42:41.634782  787862 round_trippers.go:580]     Audit-Id: 6d5fbacb-4dfe-41cd-9c7c-97b2ad17d108
	I0116 03:42:41.634797  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:42:41.634803  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:42:41.637273  787862 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"f3c5529d-e02d-46a5-965b-a2d49fe27004","resourceVersion":"372","creationTimestamp":"2024-01-16T03:42:28Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
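
The GET/PUT pair on the coredns Scale subresource above is minikube rescaling CoreDNS from the kubeadm default of 2 replicas down to 1 for a single-node start; the "rescaled to 1 replicas" line further down confirms it took effect. A minimal client-go sketch of the same round trip, written against a standard kubeconfig as an assumption rather than minikube's own plumbing:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	// GET the Scale subresource, as in the first request above.
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Lower spec.replicas and PUT it back, as in the second request.
	scale.Spec.Replicas = 1
	if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("coredns scaled down to 1 replica")
}
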
	I0116 03:42:41.740613  787862 command_runner.go:130] > apiVersion: v1
	I0116 03:42:41.740671  787862 command_runner.go:130] > data:
	I0116 03:42:41.740690  787862 command_runner.go:130] >   Corefile: |
	I0116 03:42:41.740706  787862 command_runner.go:130] >     .:53 {
	I0116 03:42:41.740720  787862 command_runner.go:130] >         errors
	I0116 03:42:41.740747  787862 command_runner.go:130] >         health {
	I0116 03:42:41.740768  787862 command_runner.go:130] >            lameduck 5s
	I0116 03:42:41.740785  787862 command_runner.go:130] >         }
	I0116 03:42:41.740803  787862 command_runner.go:130] >         ready
	I0116 03:42:41.740821  787862 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0116 03:42:41.740843  787862 command_runner.go:130] >            pods insecure
	I0116 03:42:41.740865  787862 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0116 03:42:41.740883  787862 command_runner.go:130] >            ttl 30
	I0116 03:42:41.740901  787862 command_runner.go:130] >         }
	I0116 03:42:41.740917  787862 command_runner.go:130] >         prometheus :9153
	I0116 03:42:41.740941  787862 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0116 03:42:41.740962  787862 command_runner.go:130] >            max_concurrent 1000
	I0116 03:42:41.740979  787862 command_runner.go:130] >         }
	I0116 03:42:41.740994  787862 command_runner.go:130] >         cache 30
	I0116 03:42:41.741008  787862 command_runner.go:130] >         loop
	I0116 03:42:41.741022  787862 command_runner.go:130] >         reload
	I0116 03:42:41.741050  787862 command_runner.go:130] >         loadbalance
	I0116 03:42:41.741072  787862 command_runner.go:130] >     }
	I0116 03:42:41.741087  787862 command_runner.go:130] > kind: ConfigMap
	I0116 03:42:41.741100  787862 command_runner.go:130] > metadata:
	I0116 03:42:41.741116  787862 command_runner.go:130] >   creationTimestamp: "2024-01-16T03:42:28Z"
	I0116 03:42:41.741130  787862 command_runner.go:130] >   name: coredns
	I0116 03:42:41.741155  787862 command_runner.go:130] >   namespace: kube-system
	I0116 03:42:41.741178  787862 command_runner.go:130] >   resourceVersion: "263"
	I0116 03:42:41.741196  787862 command_runner.go:130] >   uid: cb691f4b-3a7a-4e5c-a374-adf0ea654380
	I0116 03:42:41.741357  787862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
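
The sed pipeline above splices two edits into the Corefile dumped just before it: a log directive ahead of errors, and a hosts block ahead of the forward plugin that resolves host.minikube.internal to the gateway address 192.168.58.1. Reconstructed from those pieces (indentation approximate), the replaced Corefile should come out as:

.:53 {
    log
    errors
    health {
       lameduck 5s
    }
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
       pods insecure
       fallthrough in-addr.arpa ip6.arpa
       ttl 30
    }
    prometheus :9153
    hosts {
       192.168.58.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf {
       max_concurrent 1000
    }
    cache 30
    loop
    reload
    loadbalance
}

The "configmap/coredns replaced" and "host record injected" lines further down confirm the write succeeded.
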
	I0116 03:42:41.792371  787862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 03:42:41.828325  787862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:42:41.982536  787862 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0116 03:42:41.982605  787862 round_trippers.go:469] Request Headers:
	I0116 03:42:41.982630  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:42:41.982648  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:42:42.040553  787862 round_trippers.go:574] Response Status: 200 OK in 57 milliseconds
	I0116 03:42:42.040619  787862 round_trippers.go:577] Response Headers:
	I0116 03:42:42.040641  787862 round_trippers.go:580]     Audit-Id: 37884cef-a39a-46f9-b3d4-690f2d98269c
	I0116 03:42:42.040661  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:42:42.040692  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:42:42.040715  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:42:42.040735  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:42:42.040755  787862 round_trippers.go:580]     Content-Length: 291
	I0116 03:42:42.040774  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:42:42 GMT
	I0116 03:42:42.048936  787862 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"f3c5529d-e02d-46a5-965b-a2d49fe27004","resourceVersion":"384","creationTimestamp":"2024-01-16T03:42:28Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0116 03:42:42.049121  787862 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-741097" context rescaled to 1 replicas
	I0116 03:42:42.049174  787862 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 03:42:42.052794  787862 out.go:177] * Verifying Kubernetes components...
	I0116 03:42:42.054976  787862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:42:42.379602  787862 command_runner.go:130] > configmap/coredns replaced
	I0116 03:42:42.384156  787862 start.go:929] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
	I0116 03:42:42.404496  787862 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0116 03:42:42.407473  787862 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0116 03:42:42.407538  787862 round_trippers.go:469] Request Headers:
	I0116 03:42:42.407562  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:42:42.407581  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:42:42.435173  787862 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I0116 03:42:42.435234  787862 round_trippers.go:577] Response Headers:
	I0116 03:42:42.435255  787862 round_trippers.go:580]     Audit-Id: 9a22f62f-7a4d-49f6-ad0f-9d1eb4de5dcd
	I0116 03:42:42.435273  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:42:42.435310  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:42:42.435332  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:42:42.435352  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:42:42.435370  787862 round_trippers.go:580]     Content-Length: 1273
	I0116 03:42:42.435388  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:42:42 GMT
	I0116 03:42:42.439160  787862 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"397"},"items":[{"metadata":{"name":"standard","uid":"5c173682-0321-40cb-a350-7197ab7ea5ea","resourceVersion":"396","creationTimestamp":"2024-01-16T03:42:42Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-16T03:42:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0116 03:42:42.439624  787862 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"5c173682-0321-40cb-a350-7197ab7ea5ea","resourceVersion":"396","creationTimestamp":"2024-01-16T03:42:42Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-16T03:42:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0116 03:42:42.439703  787862 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0116 03:42:42.439739  787862 round_trippers.go:469] Request Headers:
	I0116 03:42:42.439764  787862 round_trippers.go:473]     Content-Type: application/json
	I0116 03:42:42.439785  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:42:42.439804  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:42:42.446319  787862 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0116 03:42:42.446371  787862 round_trippers.go:577] Response Headers:
	I0116 03:42:42.446393  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:42:42.446411  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:42:42.446429  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:42:42.446460  787862 round_trippers.go:580]     Content-Length: 1220
	I0116 03:42:42.446482  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:42:42 GMT
	I0116 03:42:42.446502  787862 round_trippers.go:580]     Audit-Id: 9faa99c3-4a8f-4f79-9911-cf41168f7468
	I0116 03:42:42.446521  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:42:42.446571  787862 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"5c173682-0321-40cb-a350-7197ab7ea5ea","resourceVersion":"396","creationTimestamp":"2024-01-16T03:42:42Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-16T03:42:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
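
The GET/PUT pair on /apis/storage.k8s.io/v1/storageclasses above is minikube confirming the freshly applied "standard" StorageClass and writing it back unchanged. Decoding the kubectl.kubernetes.io/last-applied-configuration annotation embedded in the response body gives the manifest the default-storageclass addon applied:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
provisioner: k8s.io/minikube-hostpath
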
	I0116 03:42:42.564935  787862 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0116 03:42:42.570719  787862 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0116 03:42:42.581320  787862 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0116 03:42:42.588917  787862 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0116 03:42:42.599260  787862 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0116 03:42:42.609130  787862 command_runner.go:130] > pod/storage-provisioner created
	I0116 03:42:42.617057  787862 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0116 03:42:42.615005  787862 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17967-719286/kubeconfig
	I0116 03:42:42.619507  787862 addons.go:505] enable addons completed in 1.138819661s: enabled=[default-storageclass storage-provisioner]
	I0116 03:42:42.619840  787862 kapi.go:59] client config for multinode-741097: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17967-719286/.minikube/profiles/multinode-741097/client.crt", KeyFile:"/home/jenkins/minikube-integration/17967-719286/.minikube/profiles/multinode-741097/client.key", CAFile:"/home/jenkins/minikube-integration/17967-719286/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9c50), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 03:42:42.620153  787862 node_ready.go:35] waiting up to 6m0s for node "multinode-741097" to be "Ready" ...
	I0116 03:42:42.620250  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:42:42.620262  787862 round_trippers.go:469] Request Headers:
	I0116 03:42:42.620271  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:42:42.620278  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:42:42.623135  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:42:42.623150  787862 round_trippers.go:577] Response Headers:
	I0116 03:42:42.623158  787862 round_trippers.go:580]     Audit-Id: 06365589-d35a-4489-b43c-b7231d265984
	I0116 03:42:42.623166  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:42:42.623172  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:42:42.623179  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:42:42.623185  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:42:42.623191  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:42:42 GMT
	I0116 03:42:42.623310  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:42:43.120366  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:42:43.120389  787862 round_trippers.go:469] Request Headers:
	I0116 03:42:43.120398  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:42:43.120406  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:42:43.122742  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:42:43.122760  787862 round_trippers.go:577] Response Headers:
	I0116 03:42:43.122768  787862 round_trippers.go:580]     Audit-Id: 8abbf18e-a8fd-436f-bdef-8e772a15253c
	I0116 03:42:43.122775  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:42:43.122781  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:42:43.122787  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:42:43.122793  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:42:43.122800  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:42:43 GMT
	I0116 03:42:43.122921  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:42:43.621020  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:42:43.621045  787862 round_trippers.go:469] Request Headers:
	I0116 03:42:43.621055  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:42:43.621062  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:42:43.623399  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:42:43.623416  787862 round_trippers.go:577] Response Headers:
	I0116 03:42:43.623425  787862 round_trippers.go:580]     Audit-Id: 73aa232e-3110-40ec-8542-bfc221272f10
	I0116 03:42:43.623431  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:42:43.623437  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:42:43.623443  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:42:43.623450  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:42:43.623456  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:42:43 GMT
	I0116 03:42:43.623594  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:42:44.121276  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:42:44.121302  787862 round_trippers.go:469] Request Headers:
	I0116 03:42:44.121312  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:42:44.121320  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:42:44.123737  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:42:44.123812  787862 round_trippers.go:577] Response Headers:
	I0116 03:42:44.123827  787862 round_trippers.go:580]     Audit-Id: 87214330-4971-4c51-98c0-617211c78185
	I0116 03:42:44.123835  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:42:44.123841  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:42:44.123847  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:42:44.123871  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:42:44.123878  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:42:44 GMT
	I0116 03:42:44.124008  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:42:44.620396  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:42:44.620423  787862 round_trippers.go:469] Request Headers:
	I0116 03:42:44.620432  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:42:44.620440  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:42:44.622854  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:42:44.622871  787862 round_trippers.go:577] Response Headers:
	I0116 03:42:44.622879  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:42:44.622885  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:42:44.622892  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:42:44 GMT
	I0116 03:42:44.622898  787862 round_trippers.go:580]     Audit-Id: e83342d1-0e92-44b8-9788-6c183f6a73f3
	I0116 03:42:44.622904  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:42:44.622911  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:42:44.623127  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:42:44.623532  787862 node_ready.go:58] node "multinode-741097" has status "Ready":"False"
	I0116 03:42:45.121008  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:42:45.121035  787862 round_trippers.go:469] Request Headers:
	I0116 03:42:45.121045  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:42:45.121052  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:42:45.123724  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:42:45.123752  787862 round_trippers.go:577] Response Headers:
	I0116 03:42:45.123762  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:42:45.123769  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:42:45 GMT
	I0116 03:42:45.123775  787862 round_trippers.go:580]     Audit-Id: 6cb35e28-02b1-4e62-bd2f-825fdc3655d3
	I0116 03:42:45.123782  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:42:45.123788  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:42:45.123795  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:42:45.123927  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:42:45.620477  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:42:45.620499  787862 round_trippers.go:469] Request Headers:
	I0116 03:42:45.620510  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:42:45.620519  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:42:45.622825  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:42:45.622842  787862 round_trippers.go:577] Response Headers:
	I0116 03:42:45.622850  787862 round_trippers.go:580]     Audit-Id: 8f4f1d5c-6b6e-4970-9cf1-adf0ee7a9a4c
	I0116 03:42:45.622857  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:42:45.622863  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:42:45.622870  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:42:45.622876  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:42:45.622882  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:42:45 GMT
	I0116 03:42:45.623033  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:42:46.120822  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:42:46.120850  787862 round_trippers.go:469] Request Headers:
	I0116 03:42:46.120859  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:42:46.120866  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:42:46.123191  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:42:46.123210  787862 round_trippers.go:577] Response Headers:
	I0116 03:42:46.123218  787862 round_trippers.go:580]     Audit-Id: 522c38a1-c487-4d98-a3d4-cb319d3be115
	I0116 03:42:46.123224  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:42:46.123230  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:42:46.123236  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:42:46.123243  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:42:46.123252  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:42:46 GMT
	I0116 03:42:46.123412  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:42:46.620391  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:42:46.620411  787862 round_trippers.go:469] Request Headers:
	I0116 03:42:46.620435  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:42:46.620442  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:42:46.622877  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:42:46.622900  787862 round_trippers.go:577] Response Headers:
	I0116 03:42:46.622909  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:42:46.622915  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:42:46.622922  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:42:46.622928  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:42:46 GMT
	I0116 03:42:46.622934  787862 round_trippers.go:580]     Audit-Id: e439f188-835a-44a0-a124-7d7c13cc5863
	I0116 03:42:46.622943  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:42:46.623110  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:42:47.121277  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:42:47.121301  787862 round_trippers.go:469] Request Headers:
	I0116 03:42:47.121310  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:42:47.121318  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:42:47.123685  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:42:47.123706  787862 round_trippers.go:577] Response Headers:
	I0116 03:42:47.123715  787862 round_trippers.go:580]     Audit-Id: aecf44fd-0e61-48bb-a30d-b470d78728a4
	I0116 03:42:47.123721  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:42:47.123728  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:42:47.123734  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:42:47.123743  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:42:47.123749  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:42:47 GMT
	I0116 03:42:47.123944  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:42:47.124367  787862 node_ready.go:58] node "multinode-741097" has status "Ready":"False"
	I0116 03:42:47.621118  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:42:47.621139  787862 round_trippers.go:469] Request Headers:
	I0116 03:42:47.621149  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:42:47.621156  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:42:47.623398  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:42:47.623414  787862 round_trippers.go:577] Response Headers:
	I0116 03:42:47.623422  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:42:47.623429  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:42:47.623435  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:42:47.623441  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:42:47.623448  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:42:47 GMT
	I0116 03:42:47.623454  787862 round_trippers.go:580]     Audit-Id: b207c108-65f3-4786-af8b-8a7ddcaf4fe9
	I0116 03:42:47.623596  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:42:48.121138  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:42:48.121161  787862 round_trippers.go:469] Request Headers:
	I0116 03:42:48.121170  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:42:48.121177  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:42:48.123586  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:42:48.123612  787862 round_trippers.go:577] Response Headers:
	I0116 03:42:48.123621  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:42:48.123628  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:42:48.123634  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:42:48.123641  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:42:48.123648  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:42:48 GMT
	I0116 03:42:48.123655  787862 round_trippers.go:580]     Audit-Id: 5fb3c4b1-e4a9-41b9-a7ec-b3bb15f42f88
	I0116 03:42:48.123885  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:42:48.620863  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:42:48.620886  787862 round_trippers.go:469] Request Headers:
	I0116 03:42:48.620896  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:42:48.620903  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:42:48.623274  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:42:48.623301  787862 round_trippers.go:577] Response Headers:
	I0116 03:42:48.623310  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:42:48.623316  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:42:48.623323  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:42:48 GMT
	I0116 03:42:48.623330  787862 round_trippers.go:580]     Audit-Id: f05c6a0a-2920-4d2b-998c-385091dbac37
	I0116 03:42:48.623342  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:42:48.623348  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:42:48.623604  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:42:49.120473  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:42:49.120495  787862 round_trippers.go:469] Request Headers:
	I0116 03:42:49.120504  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:42:49.120512  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:42:49.122952  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:42:49.122972  787862 round_trippers.go:577] Response Headers:
	I0116 03:42:49.122980  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:42:49.122987  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:42:49.122994  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:42:49 GMT
	I0116 03:42:49.123000  787862 round_trippers.go:580]     Audit-Id: 4cc121a0-ebae-485a-9ec6-ca73a7bbdaef
	I0116 03:42:49.123006  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:42:49.123012  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:42:49.123169  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:42:49.620380  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:42:49.620404  787862 round_trippers.go:469] Request Headers:
	I0116 03:42:49.620414  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:42:49.620421  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:42:49.622767  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:42:49.622784  787862 round_trippers.go:577] Response Headers:
	I0116 03:42:49.622793  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:42:49.622799  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:42:49.622806  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:42:49.622812  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:42:49 GMT
	I0116 03:42:49.622818  787862 round_trippers.go:580]     Audit-Id: 5a55d548-167b-4760-ad2f-a852de1928cc
	I0116 03:42:49.622824  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:42:49.622940  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:42:49.623330  787862 node_ready.go:58] node "multinode-741097" has status "Ready":"False"
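
The repeated GET requests above are minikube's node-readiness wait loop: roughly every 500 ms it fetches /api/v1/nodes/multinode-741097 and inspects the node's Ready condition, logging has status "Ready":"False" until the kubelet reports Ready. A minimal client-go sketch of such a poll follows; the name waitNodeReady and the fixed 500 ms interval are illustrative assumptions, not minikube's actual node_ready.go code.

	package nodewait

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitNodeReady polls GET /api/v1/nodes/<name> until the node's Ready
	// condition is True, mirroring the request loop in the log above.
	// The helper name and 500 ms interval are assumptions for illustration.
	func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
		return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient API errors as "not ready yet"
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					fmt.Printf("node %q has status \"Ready\":%q\n", name, c.Status)
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}

Each false return schedules another fetch, which is why the same Node object (still at resourceVersion 334) is retrieved over and over until the kubelet updates its Ready condition.
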
	I0116 03:42:50.120849  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:42:50.120871  787862 round_trippers.go:469] Request Headers:
	I0116 03:42:50.120883  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:42:50.120890  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:42:50.123228  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:42:50.123245  787862 round_trippers.go:577] Response Headers:
	I0116 03:42:50.123253  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:42:50.123260  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:42:50 GMT
	I0116 03:42:50.123267  787862 round_trippers.go:580]     Audit-Id: c2ada3ed-18a9-4091-a8cd-3e83b1f474e9
	I0116 03:42:50.123273  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:42:50.123279  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:42:50.123286  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:42:50.123413  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:42:50.620383  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:42:50.620401  787862 round_trippers.go:469] Request Headers:
	I0116 03:42:50.620411  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:42:50.620418  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:42:50.622820  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:42:50.622843  787862 round_trippers.go:577] Response Headers:
	I0116 03:42:50.622852  787862 round_trippers.go:580]     Audit-Id: 4c9dfebb-d617-48e7-b05f-ab080a6624c3
	I0116 03:42:50.622863  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:42:50.622869  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:42:50.622875  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:42:50.622883  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:42:50.622890  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:42:50 GMT
	I0116 03:42:50.623188  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:42:51.120499  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:42:51.120519  787862 round_trippers.go:469] Request Headers:
	I0116 03:42:51.120529  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:42:51.120537  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:42:51.123000  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:42:51.123023  787862 round_trippers.go:577] Response Headers:
	I0116 03:42:51.123033  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:42:51.123044  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:42:51.123051  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:42:51 GMT
	I0116 03:42:51.123060  787862 round_trippers.go:580]     Audit-Id: a82f0665-4904-4f89-8271-452f92feb966
	I0116 03:42:51.123067  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:42:51.123073  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:42:51.123213  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:42:51.621274  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:42:51.621297  787862 round_trippers.go:469] Request Headers:
	I0116 03:42:51.621307  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:42:51.621314  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:42:51.623735  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:42:51.623757  787862 round_trippers.go:577] Response Headers:
	I0116 03:42:51.623765  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:42:51.623772  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:42:51.623778  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:42:51.623784  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:42:51.623790  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:42:51 GMT
	I0116 03:42:51.623801  787862 round_trippers.go:580]     Audit-Id: 7cd8b6e7-df64-4309-b74b-7b515c26c5a1
	I0116 03:42:51.623975  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:42:51.624395  787862 node_ready.go:58] node "multinode-741097" has status "Ready":"False"
	I0116 03:42:52.121020  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:42:52.121043  787862 round_trippers.go:469] Request Headers:
	I0116 03:42:52.121056  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:42:52.121064  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:42:52.123392  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:42:52.123420  787862 round_trippers.go:577] Response Headers:
	I0116 03:42:52.123428  787862 round_trippers.go:580]     Audit-Id: 8acf07f9-88cf-4db2-8eb5-9baa3a01f109
	I0116 03:42:52.123447  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:42:52.123460  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:42:52.123467  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:42:52.123474  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:42:52.123484  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:42:52 GMT
	I0116 03:42:52.123784  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:42:52.620559  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:42:52.620589  787862 round_trippers.go:469] Request Headers:
	I0116 03:42:52.620599  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:42:52.620605  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:42:52.622960  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:42:52.622980  787862 round_trippers.go:577] Response Headers:
	I0116 03:42:52.622988  787862 round_trippers.go:580]     Audit-Id: 58de4025-9de2-4f8e-8900-7b1b71589282
	I0116 03:42:52.622995  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:42:52.623001  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:42:52.623008  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:42:52.623014  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:42:52.623021  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:42:52 GMT
	I0116 03:42:52.623507  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:42:53.120366  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:42:53.120389  787862 round_trippers.go:469] Request Headers:
	I0116 03:42:53.120399  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:42:53.120406  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:42:53.122779  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:42:53.122796  787862 round_trippers.go:577] Response Headers:
	I0116 03:42:53.122805  787862 round_trippers.go:580]     Audit-Id: 7823342c-a7e6-4e22-8731-5c718485de83
	I0116 03:42:53.122811  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:42:53.122817  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:42:53.122823  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:42:53.122829  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:42:53.122836  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:42:53 GMT
	I0116 03:42:53.122947  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:42:53.620877  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:42:53.620900  787862 round_trippers.go:469] Request Headers:
	I0116 03:42:53.620909  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:42:53.620922  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:42:53.623334  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:42:53.623356  787862 round_trippers.go:577] Response Headers:
	I0116 03:42:53.623365  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:42:53 GMT
	I0116 03:42:53.623371  787862 round_trippers.go:580]     Audit-Id: 1972140d-d356-4abb-a618-332310765e9e
	I0116 03:42:53.623378  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:42:53.623385  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:42:53.623396  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:42:53.623403  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:42:53.623506  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:42:54.121115  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:42:54.121140  787862 round_trippers.go:469] Request Headers:
	I0116 03:42:54.121150  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:42:54.121157  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:42:54.123607  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:42:54.123626  787862 round_trippers.go:577] Response Headers:
	I0116 03:42:54.123634  787862 round_trippers.go:580]     Audit-Id: e61af4d0-11c7-40be-b746-e1c99c3ea287
	I0116 03:42:54.123640  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:42:54.123646  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:42:54.123652  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:42:54.123658  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:42:54.123665  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:42:54 GMT
	I0116 03:42:54.123763  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:42:54.124188  787862 node_ready.go:58] node "multinode-741097" has status "Ready":"False"
	I0116 03:42:54.620372  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:42:54.620393  787862 round_trippers.go:469] Request Headers:
	I0116 03:42:54.620403  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:42:54.620410  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:42:54.622750  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:42:54.622765  787862 round_trippers.go:577] Response Headers:
	I0116 03:42:54.622773  787862 round_trippers.go:580]     Audit-Id: 4d8522ce-ecca-41cd-84ca-fb08a3f3637a
	I0116 03:42:54.622779  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:42:54.622785  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:42:54.622791  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:42:54.622797  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:42:54.622804  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:42:54 GMT
	I0116 03:42:54.622898  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:42:55.120906  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:42:55.120932  787862 round_trippers.go:469] Request Headers:
	I0116 03:42:55.120942  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:42:55.120949  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:42:55.123319  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:42:55.123344  787862 round_trippers.go:577] Response Headers:
	I0116 03:42:55.123352  787862 round_trippers.go:580]     Audit-Id: f4da1f4a-0f1b-4504-aa4b-36d32854a530
	I0116 03:42:55.123359  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:42:55.123365  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:42:55.123372  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:42:55.123378  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:42:55.123388  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:42:55 GMT
	I0116 03:42:55.123617  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:42:55.621308  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:42:55.621329  787862 round_trippers.go:469] Request Headers:
	I0116 03:42:55.621340  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:42:55.621347  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:42:55.623697  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:42:55.623715  787862 round_trippers.go:577] Response Headers:
	I0116 03:42:55.623723  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:42:55.623730  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:42:55.623736  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:42:55.623742  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:42:55.623749  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:42:55 GMT
	I0116 03:42:55.623759  787862 round_trippers.go:580]     Audit-Id: 2780a40a-d555-404d-b5ce-b2af24439d6a
	I0116 03:42:55.624129  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:42:56.121236  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:42:56.121262  787862 round_trippers.go:469] Request Headers:
	I0116 03:42:56.121273  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:42:56.121280  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:42:56.123618  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:42:56.123638  787862 round_trippers.go:577] Response Headers:
	I0116 03:42:56.123647  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:42:56.123653  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:42:56.123659  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:42:56.123665  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:42:56 GMT
	I0116 03:42:56.123671  787862 round_trippers.go:580]     Audit-Id: 5813f107-6b7a-4de8-85b1-3b1071b6002b
	I0116 03:42:56.123677  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:42:56.123796  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:42:56.124225  787862 node_ready.go:58] node "multinode-741097" has status "Ready":"False"
	I0116 03:42:56.620567  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:42:56.620590  787862 round_trippers.go:469] Request Headers:
	I0116 03:42:56.620600  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:42:56.620607  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:42:56.623422  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:42:56.623440  787862 round_trippers.go:577] Response Headers:
	I0116 03:42:56.623449  787862 round_trippers.go:580]     Audit-Id: 057226d2-5a5f-4d5d-b4ad-0619b8903ef7
	I0116 03:42:56.623456  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:42:56.623462  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:42:56.623468  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:42:56.623474  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:42:56.623481  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:42:56 GMT
	I0116 03:42:56.624224  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:42:57.120899  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:42:57.120924  787862 round_trippers.go:469] Request Headers:
	I0116 03:42:57.120934  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:42:57.120942  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:42:57.123509  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:42:57.123534  787862 round_trippers.go:577] Response Headers:
	I0116 03:42:57.123543  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:42:57 GMT
	I0116 03:42:57.123550  787862 round_trippers.go:580]     Audit-Id: 63d4ad20-440c-4d6c-be91-f6e71d01c12a
	I0116 03:42:57.123556  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:42:57.123562  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:42:57.123568  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:42:57.123574  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:42:57.123793  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:42:57.620841  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:42:57.620862  787862 round_trippers.go:469] Request Headers:
	I0116 03:42:57.620871  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:42:57.620879  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:42:57.623154  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:42:57.623179  787862 round_trippers.go:577] Response Headers:
	I0116 03:42:57.623188  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:42:57.623194  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:42:57.623201  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:42:57 GMT
	I0116 03:42:57.623209  787862 round_trippers.go:580]     Audit-Id: f3f28879-1dec-4cef-86c5-3a627bc8df83
	I0116 03:42:57.623218  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:42:57.623227  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:42:57.623495  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:42:58.120844  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:42:58.120865  787862 round_trippers.go:469] Request Headers:
	I0116 03:42:58.120875  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:42:58.120882  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:42:58.123349  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:42:58.123369  787862 round_trippers.go:577] Response Headers:
	I0116 03:42:58.123377  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:42:58.123385  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:42:58.123391  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:42:58.123398  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:42:58 GMT
	I0116 03:42:58.123406  787862 round_trippers.go:580]     Audit-Id: 2d31fe02-bd22-465f-80cf-e53abc88030b
	I0116 03:42:58.123413  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:42:58.123642  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:42:58.620467  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:42:58.620492  787862 round_trippers.go:469] Request Headers:
	I0116 03:42:58.620502  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:42:58.620509  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:42:58.622849  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:42:58.622874  787862 round_trippers.go:577] Response Headers:
	I0116 03:42:58.622887  787862 round_trippers.go:580]     Audit-Id: a7592bba-73cc-4f98-8611-757ee4838f7f
	I0116 03:42:58.622894  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:42:58.622900  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:42:58.622906  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:42:58.622916  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:42:58.622923  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:42:58 GMT
	I0116 03:42:58.623022  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:42:58.623446  787862 node_ready.go:58] node "multinode-741097" has status "Ready":"False"
	I0116 03:42:59.121197  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:42:59.121221  787862 round_trippers.go:469] Request Headers:
	I0116 03:42:59.121232  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:42:59.121239  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:42:59.123602  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:42:59.123620  787862 round_trippers.go:577] Response Headers:
	I0116 03:42:59.123628  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:42:59 GMT
	I0116 03:42:59.123705  787862 round_trippers.go:580]     Audit-Id: 7885277e-095b-4dcc-b3d5-b99efa26d2d7
	I0116 03:42:59.123712  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:42:59.123718  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:42:59.123724  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:42:59.123730  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:42:59.123854  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:42:59.620683  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:42:59.620712  787862 round_trippers.go:469] Request Headers:
	I0116 03:42:59.620724  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:42:59.620732  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:42:59.623189  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:42:59.623210  787862 round_trippers.go:577] Response Headers:
	I0116 03:42:59.623219  787862 round_trippers.go:580]     Audit-Id: 970e091a-3e16-4e7b-a246-3503060fd4b9
	I0116 03:42:59.623226  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:42:59.623233  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:42:59.623239  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:42:59.623245  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:42:59.623255  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:42:59 GMT
	I0116 03:42:59.623360  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:43:00.120931  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:43:00.120967  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:00.121010  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:00.121021  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:00.123750  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:00.123776  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:00.123787  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:00.123794  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:00.123800  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:00 GMT
	I0116 03:43:00.123806  787862 round_trippers.go:580]     Audit-Id: e65ca0a8-5a39-43c0-ab86-813648392a99
	I0116 03:43:00.123813  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:00.123819  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:00.124196  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:43:00.620430  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:43:00.620461  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:00.620470  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:00.620478  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:00.622806  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:00.622827  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:00.622844  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:00.622852  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:00.622858  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:00 GMT
	I0116 03:43:00.622865  787862 round_trippers.go:580]     Audit-Id: f620cb51-a876-47bc-a987-ed93a0452706
	I0116 03:43:00.622871  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:00.622878  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:00.623196  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:43:00.623616  787862 node_ready.go:58] node "multinode-741097" has status "Ready":"False"
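
Each round_trippers.go block in this log is client-go's HTTP debug trace, emitted at high log verbosity: the verb and URL, the request headers, the response status with round-trip latency, and the response headers (including the API Priority and Fairness UIDs X-Kubernetes-Pf-Flowschema-Uid and X-Kubernetes-Pf-Prioritylevel-Uid). A simplified stand-in that produces the same shape of output is sketched below; it wraps any http.RoundTripper and is not client-go's actual implementation.

	package debugrt

	import (
		"log"
		"net/http"
		"time"
	)

	// loggingRoundTripper mimics the request/response trace seen above.
	type loggingRoundTripper struct {
		next http.RoundTripper
	}

	func (l loggingRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
		log.Printf("%s %s", req.Method, req.URL)
		log.Println("Request Headers:")
		for k, vals := range req.Header {
			for _, v := range vals {
				log.Printf("    %s: %s", k, v)
			}
		}
		start := time.Now()
		resp, err := l.next.RoundTrip(req)
		if err != nil {
			return nil, err
		}
		log.Printf("Response Status: %s in %d milliseconds", resp.Status, time.Since(start).Milliseconds())
		log.Println("Response Headers:")
		for k, vals := range resp.Header {
			for _, v := range vals {
				log.Printf("    %s: %s", k, v)
			}
		}
		return resp, nil
	}

Installed as http.Client{Transport: loggingRoundTripper{next: http.DefaultTransport}}, this reproduces the GET / Request Headers / Response Status / Response Headers sequence for every poll iteration.
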
	I0116 03:43:01.120669  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:43:01.120692  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:01.120702  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:01.120710  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:01.123008  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:01.123026  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:01.123034  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:01.123040  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:01.123047  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:01.123053  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:01.123059  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:01 GMT
	I0116 03:43:01.123065  787862 round_trippers.go:580]     Audit-Id: 791f845e-5a8b-43ec-ad69-9bfa984cdc84
	I0116 03:43:01.123219  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:43:01.621127  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:43:01.621152  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:01.621162  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:01.621169  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:01.623545  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:01.623564  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:01.623573  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:01.623579  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:01.623585  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:01.623592  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:01.623598  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:01 GMT
	I0116 03:43:01.623604  787862 round_trippers.go:580]     Audit-Id: a99aa1dc-96cd-4aa0-a33d-73fce55f7666
	I0116 03:43:01.623742  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:43:02.121305  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:43:02.121329  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:02.121339  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:02.121346  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:02.124611  787862 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:43:02.124639  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:02.124648  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:02.124655  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:02.124661  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:02.124668  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:02.124678  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:02 GMT
	I0116 03:43:02.124688  787862 round_trippers.go:580]     Audit-Id: 98004f91-4e85-4ed8-a1af-3209034bbccb
	I0116 03:43:02.124823  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:43:02.620981  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:43:02.621004  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:02.621014  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:02.621021  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:02.623289  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:02.623309  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:02.623316  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:02.623323  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:02.623329  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:02.623335  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:02.623343  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:02 GMT
	I0116 03:43:02.623350  787862 round_trippers.go:580]     Audit-Id: 4642c286-dc17-42de-b9e2-d0716e4657bb
	I0116 03:43:02.623633  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:43:02.624031  787862 node_ready.go:58] node "multinode-741097" has status "Ready":"False"
	I0116 03:43:03.121322  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:43:03.121346  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:03.121355  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:03.121363  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:03.124005  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:03.124029  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:03.124037  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:03.124044  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:03 GMT
	I0116 03:43:03.124051  787862 round_trippers.go:580]     Audit-Id: 2faecfb2-5b22-48fd-bb47-3d3737ada4f1
	I0116 03:43:03.124057  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:03.124076  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:03.124086  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:03.124235  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:43:03.620396  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:43:03.620421  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:03.620430  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:03.620442  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:03.623548  787862 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:43:03.623573  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:03.623581  787862 round_trippers.go:580]     Audit-Id: 17f380e6-1195-4869-904b-571e46262859
	I0116 03:43:03.623588  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:03.623594  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:03.623600  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:03.623609  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:03.623619  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:03 GMT
	I0116 03:43:03.623736  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:43:04.120919  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:43:04.120942  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:04.120952  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:04.120959  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:04.123413  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:04.123429  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:04.123437  787862 round_trippers.go:580]     Audit-Id: 8d055730-bb5d-4b55-8bd0-e4df5d18ecd9
	I0116 03:43:04.123444  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:04.123450  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:04.123456  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:04.123462  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:04.123468  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:04 GMT
	I0116 03:43:04.123591  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:43:04.620399  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:43:04.620425  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:04.620439  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:04.620447  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:04.622591  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:04.622614  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:04.622622  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:04.622634  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:04.622641  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:04.622651  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:04 GMT
	I0116 03:43:04.622658  787862 round_trippers.go:580]     Audit-Id: 8248ffca-11c8-41a7-8cb1-1a3e85cb1348
	I0116 03:43:04.622668  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:04.622779  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:43:05.120635  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:43:05.120659  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:05.120669  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:05.120676  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:05.123137  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:05.123160  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:05.123169  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:05.123175  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:05.123181  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:05.123190  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:05 GMT
	I0116 03:43:05.123196  787862 round_trippers.go:580]     Audit-Id: 417fb5eb-d9b2-4294-b8be-3e7707b2d54f
	I0116 03:43:05.123205  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:05.123530  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:43:05.123945  787862 node_ready.go:58] node "multinode-741097" has status "Ready":"False"
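The GET / Request Headers / Response Status / Response Headers lines that bracket every poll are client-go's verbose HTTP tracing (round_trippers.go). Here is a sketch of the same idea, not client-go's implementation, using a custom http.RoundTripper; the demo URL in main is hypothetical.

package main

import (
	"fmt"
	"net/http"
)

// loggingTransport wraps another RoundTripper and prints the verb, URL,
// request headers, response status, and response headers of each exchange,
// similar in spirit to the round_trippers.go output above.
type loggingTransport struct {
	next http.RoundTripper
}

func (t loggingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	fmt.Printf("%s %s\n", req.Method, req.URL)
	fmt.Println("Request Headers:")
	for k, v := range req.Header {
		fmt.Printf("    %s: %v\n", k, v)
	}
	resp, err := t.next.RoundTrip(req)
	if err != nil {
		return nil, err
	}
	fmt.Println("Response Status:", resp.Status)
	for k, v := range resp.Header {
		fmt.Printf("    %s: %v\n", k, v)
	}
	return resp, nil
}

func main() {
	// Hypothetical endpoint; any reachable HTTPS URL works for the demo.
	client := &http.Client{Transport: loggingTransport{next: http.DefaultTransport}}
	resp, err := client.Get("https://example.com/")
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
}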
	I0116 03:43:05.621059  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:43:05.621080  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:05.621090  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:05.621099  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:05.623477  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:05.623499  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:05.623507  787862 round_trippers.go:580]     Audit-Id: bdacf179-5368-419d-9dcb-8fd6fa17001e
	I0116 03:43:05.623514  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:05.623521  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:05.623528  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:05.623536  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:05.623543  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:05 GMT
	I0116 03:43:05.623780  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:43:06.120933  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:43:06.120961  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:06.120971  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:06.120979  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:06.123335  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:06.123354  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:06.123362  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:06.123368  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:06 GMT
	I0116 03:43:06.123374  787862 round_trippers.go:580]     Audit-Id: 47b561e1-d23c-4357-b2a1-dd0f351f8e3d
	I0116 03:43:06.123381  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:06.123387  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:06.123396  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:06.123523  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:43:06.621287  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:43:06.621312  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:06.621322  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:06.621330  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:06.628660  787862 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0116 03:43:06.628683  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:06.628697  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:06 GMT
	I0116 03:43:06.628703  787862 round_trippers.go:580]     Audit-Id: 68152852-acad-4749-a316-d5645a9d2fc9
	I0116 03:43:06.628709  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:06.628715  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:06.628721  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:06.628728  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:06.629130  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:43:07.120383  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:43:07.120408  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:07.120418  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:07.120425  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:07.122737  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:07.122759  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:07.122767  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:07 GMT
	I0116 03:43:07.122774  787862 round_trippers.go:580]     Audit-Id: c843c2ed-62ab-47e1-ae0e-b8ff8a373cb6
	I0116 03:43:07.122780  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:07.122786  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:07.122795  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:07.122806  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:07.123154  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:43:07.620390  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:43:07.620417  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:07.620428  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:07.620440  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:07.622719  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:07.622736  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:07.622744  787862 round_trippers.go:580]     Audit-Id: 401a1ab2-262d-45c1-a626-abaa0c40ed0d
	I0116 03:43:07.622751  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:07.622757  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:07.622763  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:07.622770  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:07.622776  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:07 GMT
	I0116 03:43:07.622931  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:43:07.623349  787862 node_ready.go:58] node "multinode-741097" has status "Ready":"False"
	I0116 03:43:08.121113  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:43:08.121137  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:08.121148  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:08.121155  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:08.123649  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:08.123672  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:08.123680  787862 round_trippers.go:580]     Audit-Id: 8dea6199-a910-4cc6-bb43-f43e6e7c40f0
	I0116 03:43:08.123687  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:08.123694  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:08.123700  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:08.123707  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:08.123713  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:08 GMT
	I0116 03:43:08.123829  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:43:08.620883  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:43:08.620905  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:08.620915  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:08.620922  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:08.623212  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:08.623232  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:08.623240  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:08.623246  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:08 GMT
	I0116 03:43:08.623252  787862 round_trippers.go:580]     Audit-Id: fe34cf00-1ba1-4ecc-9126-ec47994dc83c
	I0116 03:43:08.623259  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:08.623270  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:08.623280  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:08.623631  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:43:09.120375  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:43:09.120400  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:09.120410  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:09.120417  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:09.122913  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:09.122932  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:09.122941  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:09.122947  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:09.122953  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:09 GMT
	I0116 03:43:09.122959  787862 round_trippers.go:580]     Audit-Id: c2e027d0-91ff-4e9d-a0fb-3c3921eb759f
	I0116 03:43:09.122965  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:09.122971  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:09.123117  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:43:09.621109  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:43:09.621133  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:09.621143  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:09.621150  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:09.623442  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:09.623460  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:09.623475  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:09.623482  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:09 GMT
	I0116 03:43:09.623490  787862 round_trippers.go:580]     Audit-Id: 82bc29b8-c629-4d25-b213-a826c2380227
	I0116 03:43:09.623502  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:09.623508  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:09.623514  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:09.623618  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:43:09.624014  787862 node_ready.go:58] node "multinode-741097" has status "Ready":"False"
	I0116 03:43:10.120424  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:43:10.120457  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:10.120467  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:10.120474  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:10.122874  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:10.122899  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:10.122907  787862 round_trippers.go:580]     Audit-Id: 9ae9476c-8408-4526-b58c-24d89280b126
	I0116 03:43:10.122913  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:10.122920  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:10.122926  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:10.122932  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:10.122942  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:10 GMT
	I0116 03:43:10.123093  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:43:10.621270  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:43:10.621292  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:10.621303  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:10.621310  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:10.623683  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:10.623702  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:10.623710  787862 round_trippers.go:580]     Audit-Id: ca1fc9cb-3a49-48a0-aa48-9943a081a7a5
	I0116 03:43:10.623717  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:10.623723  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:10.623729  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:10.623736  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:10.623742  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:10 GMT
	I0116 03:43:10.623845  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:43:11.121201  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:43:11.121227  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:11.121236  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:11.121244  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:11.123588  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:11.123605  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:11.123613  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:11.123620  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:11.123626  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:11.123633  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:11.123639  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:11 GMT
	I0116 03:43:11.123645  787862 round_trippers.go:580]     Audit-Id: 0e026db7-4509-4ee9-88f8-f56f7cc9f6f9
	I0116 03:43:11.123812  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:43:11.620608  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:43:11.620635  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:11.620644  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:11.620652  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:11.622909  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:11.622927  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:11.622935  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:11.622941  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:11.622947  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:11.622953  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:11.622960  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:11 GMT
	I0116 03:43:11.622966  787862 round_trippers.go:580]     Audit-Id: 2b83d179-ad8a-4c3c-876f-6a4a90b5b20e
	I0116 03:43:11.623206  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:43:12.121232  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:43:12.121256  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:12.121267  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:12.121274  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:12.123777  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:12.123798  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:12.123807  787862 round_trippers.go:580]     Audit-Id: feca75ce-db19-49d6-bf19-aaf0775dd217
	I0116 03:43:12.123814  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:12.123820  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:12.123828  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:12.123834  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:12.123842  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:12 GMT
	I0116 03:43:12.123982  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:43:12.124404  787862 node_ready.go:58] node "multinode-741097" has status "Ready":"False"
	I0116 03:43:12.620792  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:43:12.620814  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:12.620823  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:12.620830  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:12.623162  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:12.623187  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:12.623195  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:12.623203  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:12 GMT
	I0116 03:43:12.623209  787862 round_trippers.go:580]     Audit-Id: ca12a022-5ab0-42b6-a14c-1bab24b8498f
	I0116 03:43:12.623215  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:12.623221  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:12.623227  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:12.623370  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"334","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 03:43:13.120392  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:43:13.120413  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:13.120422  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:13.120429  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:13.122827  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:13.122851  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:13.122860  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:13.122867  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:13 GMT
	I0116 03:43:13.122879  787862 round_trippers.go:580]     Audit-Id: 89c34a1e-56e8-4e70-9b7b-e2cabf0a9dfd
	I0116 03:43:13.122886  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:13.122892  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:13.122901  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:13.123061  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"424","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0116 03:43:13.123482  787862 node_ready.go:49] node "multinode-741097" has status "Ready":"True"
	I0116 03:43:13.123500  787862 node_ready.go:38] duration metric: took 30.503329004s waiting for node "multinode-741097" to be "Ready" ...
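
The roughly 500ms request cadence above is minikube polling the node object until its NodeReady condition turns True. The sketch below reproduces that check with client-go; the kubeconfig path, node name, interval, and timeout are assumptions for illustration, not minikube's actual wiring (the test builds its client from the profile's own kubeconfig).

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig location; the log above talks to
        // https://192.168.58.2:8443 with the profile's credentials instead.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Poll every 500ms, matching the cadence of the GETs above, until
        // the node's NodeReady condition is True or the timeout expires.
        // Any request error aborts the poll in this sketch.
        err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := client.CoreV1().Nodes().Get(ctx, "multinode-741097", metav1.GetOptions{})
                if err != nil {
                    return false, err
                }
                for _, cond := range node.Status.Conditions {
                    if cond.Type == corev1.NodeReady {
                        return cond.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
        fmt.Println("node Ready:", err == nil)
    }
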
	I0116 03:43:13.123510  787862 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:43:13.123574  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0116 03:43:13.123585  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:13.123592  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:13.123599  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:13.126839  787862 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:43:13.126859  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:13.126867  787862 round_trippers.go:580]     Audit-Id: d0e88c41-c308-47c7-9573-c537ce801b82
	I0116 03:43:13.126874  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:13.126880  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:13.126887  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:13.126893  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:13.126900  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:13 GMT
	I0116 03:43:13.127747  787862 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"430"},"items":[{"metadata":{"name":"coredns-5dd5756b68-2z5xs","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ed28cba5-d03c-4872-8d43-ac2b9cbde1c3","resourceVersion":"428","creationTimestamp":"2024-01-16T03:42:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c85c0644-575e-40d8-9912-1bb96f25128f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c85c0644-575e-40d8-9912-1bb96f25128f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55534 chars]
	I0116 03:43:13.131855  787862 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-2z5xs" in "kube-system" namespace to be "Ready" ...
	I0116 03:43:13.131972  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2z5xs
	I0116 03:43:13.131985  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:13.132015  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:13.132024  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:13.134363  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:13.134392  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:13.134399  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:13.134406  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:13 GMT
	I0116 03:43:13.134412  787862 round_trippers.go:580]     Audit-Id: aba074f0-84d4-45b5-9ec5-1dff6bfb477d
	I0116 03:43:13.134421  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:13.134432  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:13.134438  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:13.134653  787862 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2z5xs","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ed28cba5-d03c-4872-8d43-ac2b9cbde1c3","resourceVersion":"428","creationTimestamp":"2024-01-16T03:42:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c85c0644-575e-40d8-9912-1bb96f25128f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c85c0644-575e-40d8-9912-1bb96f25128f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0116 03:43:13.135123  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:43:13.135139  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:13.135147  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:13.135156  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:13.137296  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:13.137315  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:13.137322  787862 round_trippers.go:580]     Audit-Id: 0e54cf17-8cb8-41f0-b4fa-799ee58a8d1d
	I0116 03:43:13.137328  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:13.137334  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:13.137343  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:13.137354  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:13.137360  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:13 GMT
	I0116 03:43:13.137580  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"424","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0116 03:43:13.632729  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2z5xs
	I0116 03:43:13.632750  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:13.632760  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:13.632768  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:13.635266  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:13.635355  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:13.635369  787862 round_trippers.go:580]     Audit-Id: 59f8458c-401e-4f62-a63b-b6484c0a4320
	I0116 03:43:13.635378  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:13.635384  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:13.635390  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:13.635404  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:13.635412  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:13 GMT
	I0116 03:43:13.635543  787862 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2z5xs","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ed28cba5-d03c-4872-8d43-ac2b9cbde1c3","resourceVersion":"440","creationTimestamp":"2024-01-16T03:42:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c85c0644-575e-40d8-9912-1bb96f25128f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c85c0644-575e-40d8-9912-1bb96f25128f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I0116 03:43:13.636117  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:43:13.636135  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:13.636143  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:13.636150  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:13.638233  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:13.638249  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:13.638257  787862 round_trippers.go:580]     Audit-Id: c26c35d1-4b36-4bbc-8b37-7c2e92b1c8f9
	I0116 03:43:13.638263  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:13.638269  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:13.638275  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:13.638281  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:13.638287  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:13 GMT
	I0116 03:43:13.638398  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"424","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0116 03:43:14.132453  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2z5xs
	I0116 03:43:14.132475  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:14.132485  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:14.132492  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:14.135044  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:14.135071  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:14.135079  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:14.135087  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:14.135094  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:14.135101  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:14 GMT
	I0116 03:43:14.135108  787862 round_trippers.go:580]     Audit-Id: 49ffe68d-df1d-400b-a75c-10320c0115b0
	I0116 03:43:14.135114  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:14.135230  787862 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2z5xs","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ed28cba5-d03c-4872-8d43-ac2b9cbde1c3","resourceVersion":"440","creationTimestamp":"2024-01-16T03:42:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c85c0644-575e-40d8-9912-1bb96f25128f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c85c0644-575e-40d8-9912-1bb96f25128f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I0116 03:43:14.135777  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:43:14.135793  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:14.135801  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:14.135808  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:14.137915  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:14.137935  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:14.137943  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:14.137949  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:14 GMT
	I0116 03:43:14.137956  787862 round_trippers.go:580]     Audit-Id: 09b97be8-53e0-436c-a16d-aaa7cefc010a
	I0116 03:43:14.137962  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:14.137975  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:14.137985  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:14.138122  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"424","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0116 03:43:14.632505  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2z5xs
	I0116 03:43:14.632530  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:14.632539  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:14.632546  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:14.635174  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:14.635196  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:14.635205  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:14.635211  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:14.635217  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:14.635224  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:14 GMT
	I0116 03:43:14.635231  787862 round_trippers.go:580]     Audit-Id: 26e38f11-58d5-40c7-9f72-11e1b12159d9
	I0116 03:43:14.635240  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:14.635565  787862 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2z5xs","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ed28cba5-d03c-4872-8d43-ac2b9cbde1c3","resourceVersion":"444","creationTimestamp":"2024-01-16T03:42:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c85c0644-575e-40d8-9912-1bb96f25128f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c85c0644-575e-40d8-9912-1bb96f25128f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0116 03:43:14.636113  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:43:14.636128  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:14.636137  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:14.636144  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:14.638361  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:14.638386  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:14.638395  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:14.638402  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:14 GMT
	I0116 03:43:14.638408  787862 round_trippers.go:580]     Audit-Id: 27236be4-1aa5-4e4a-a904-c598ca3f255a
	I0116 03:43:14.638414  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:14.638421  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:14.638427  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:14.638519  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"424","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0116 03:43:14.638908  787862 pod_ready.go:92] pod "coredns-5dd5756b68-2z5xs" in "kube-system" namespace has status "Ready":"True"
	I0116 03:43:14.638920  787862 pod_ready.go:81] duration metric: took 1.507037703s waiting for pod "coredns-5dd5756b68-2z5xs" in "kube-system" namespace to be "Ready" ...
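
Each per-pod wait in this log repeats the same pair of GETs (the pod, then its node) until the pod reports Ready. What "Ready" means here is the pod's PodReady condition being True; below is a minimal helper expressing that test, assuming a *corev1.Pod already fetched with client-go.

    package podutil

    import corev1 "k8s.io/api/core/v1"

    // isPodReady reports whether the pod's PodReady condition is True,
    // the state each per-pod wait in this log is polling for.
    func isPodReady(pod *corev1.Pod) bool {
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue
            }
        }
        return false
    }
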
	I0116 03:43:14.638930  787862 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-741097" in "kube-system" namespace to be "Ready" ...
	I0116 03:43:14.638994  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-741097
	I0116 03:43:14.639000  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:14.639007  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:14.639013  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:14.641192  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:14.641212  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:14.641219  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:14.641226  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:14 GMT
	I0116 03:43:14.641233  787862 round_trippers.go:580]     Audit-Id: b0d5c141-927b-4666-b210-9da5f06185a8
	I0116 03:43:14.641245  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:14.641251  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:14.641258  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:14.641429  787862 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-741097","namespace":"kube-system","uid":"e88b8e2a-3aa3-4ddc-93aa-e8119b68034e","resourceVersion":"318","creationTimestamp":"2024-01-16T03:42:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"ae5a83c589ec43e0eae7b90d3c11eb5e","kubernetes.io/config.mirror":"ae5a83c589ec43e0eae7b90d3c11eb5e","kubernetes.io/config.seen":"2024-01-16T03:42:28.150313420Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0116 03:43:14.641837  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:43:14.641852  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:14.641860  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:14.641867  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:14.643749  787862 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 03:43:14.643771  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:14.643779  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:14.643785  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:14.643791  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:14 GMT
	I0116 03:43:14.643797  787862 round_trippers.go:580]     Audit-Id: 8025a186-89ae-479f-b89f-0db5409fd77c
	I0116 03:43:14.643803  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:14.643809  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:14.643976  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"424","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0116 03:43:14.644361  787862 pod_ready.go:92] pod "etcd-multinode-741097" in "kube-system" namespace has status "Ready":"True"
	I0116 03:43:14.644379  787862 pod_ready.go:81] duration metric: took 5.440917ms waiting for pod "etcd-multinode-741097" in "kube-system" namespace to be "Ready" ...
	I0116 03:43:14.644392  787862 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-741097" in "kube-system" namespace to be "Ready" ...
	I0116 03:43:14.644453  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-741097
	I0116 03:43:14.644462  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:14.644470  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:14.644477  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:14.646443  787862 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 03:43:14.646462  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:14.646473  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:14.646480  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:14.646487  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:14.646493  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:14 GMT
	I0116 03:43:14.646499  787862 round_trippers.go:580]     Audit-Id: 1e7f9f4a-49d0-46c4-8920-75cb47cb8ae9
	I0116 03:43:14.646506  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:14.646796  787862 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-741097","namespace":"kube-system","uid":"15831bc8-c5f4-4288-adf2-c5af42d05ebb","resourceVersion":"328","creationTimestamp":"2024-01-16T03:42:26Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"ac7ff780fba56f1234d32f5a3c8a2527","kubernetes.io/config.mirror":"ac7ff780fba56f1234d32f5a3c8a2527","kubernetes.io/config.seen":"2024-01-16T03:42:20.538989474Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0116 03:43:14.647347  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:43:14.647366  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:14.647381  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:14.647388  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:14.649577  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:14.649597  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:14.649605  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:14.649611  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:14.649618  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:14 GMT
	I0116 03:43:14.649624  787862 round_trippers.go:580]     Audit-Id: 66b065c1-2c1e-4154-9a8b-7e8479ece06b
	I0116 03:43:14.649634  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:14.649644  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:14.649781  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"424","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0116 03:43:14.650177  787862 pod_ready.go:92] pod "kube-apiserver-multinode-741097" in "kube-system" namespace has status "Ready":"True"
	I0116 03:43:14.650189  787862 pod_ready.go:81] duration metric: took 5.787874ms waiting for pod "kube-apiserver-multinode-741097" in "kube-system" namespace to be "Ready" ...
	I0116 03:43:14.650199  787862 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-741097" in "kube-system" namespace to be "Ready" ...
	I0116 03:43:14.650275  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-741097
	I0116 03:43:14.650280  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:14.650287  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:14.650294  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:14.652474  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:14.652497  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:14.652505  787862 round_trippers.go:580]     Audit-Id: bbe66965-7828-41ba-93c1-2d8bf41a00d8
	I0116 03:43:14.652511  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:14.652518  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:14.652527  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:14.652533  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:14.652540  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:14 GMT
	I0116 03:43:14.652700  787862 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-741097","namespace":"kube-system","uid":"848561d9-4f18-415a-afb3-a1697ab9738a","resourceVersion":"323","creationTimestamp":"2024-01-16T03:42:28Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"245252063dc61aa82cf10d0a0b149c59","kubernetes.io/config.mirror":"245252063dc61aa82cf10d0a0b149c59","kubernetes.io/config.seen":"2024-01-16T03:42:28.150319747Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0116 03:43:14.653228  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:43:14.653237  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:14.653244  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:14.653251  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:14.655516  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:14.655532  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:14.655539  787862 round_trippers.go:580]     Audit-Id: 33c1b13e-7808-4c92-b9c2-e3165f1be3c5
	I0116 03:43:14.655552  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:14.655558  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:14.655564  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:14.655570  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:14.655580  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:14 GMT
	I0116 03:43:14.655699  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"424","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0116 03:43:14.656113  787862 pod_ready.go:92] pod "kube-controller-manager-multinode-741097" in "kube-system" namespace has status "Ready":"True"
	I0116 03:43:14.656126  787862 pod_ready.go:81] duration metric: took 5.919682ms waiting for pod "kube-controller-manager-multinode-741097" in "kube-system" namespace to be "Ready" ...
	I0116 03:43:14.656136  787862 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cm64c" in "kube-system" namespace to be "Ready" ...
	I0116 03:43:14.656196  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cm64c
	I0116 03:43:14.656201  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:14.656210  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:14.656222  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:14.658411  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:14.658432  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:14.658439  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:14.658446  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:14 GMT
	I0116 03:43:14.658453  787862 round_trippers.go:580]     Audit-Id: 6b189944-0033-406a-945b-7dd3be0969bc
	I0116 03:43:14.658459  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:14.658465  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:14.658493  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:14.658647  787862 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-cm64c","generateName":"kube-proxy-","namespace":"kube-system","uid":"07b12aa4-20cf-4db6-8c2b-80085bc219a5","resourceVersion":"411","creationTimestamp":"2024-01-16T03:42:41Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"6c0da6ec-1f97-48d4-bc73-49dc78d5a834","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6c0da6ec-1f97-48d4-bc73-49dc78d5a834\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0116 03:43:14.721416  787862 request.go:629] Waited for 62.192443ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:43:14.721489  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:43:14.721497  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:14.721506  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:14.721514  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:14.724033  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:14.724109  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:14.724132  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:14.724145  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:14.724152  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:14.724158  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:14 GMT
	I0116 03:43:14.724177  787862 round_trippers.go:580]     Audit-Id: dea9289d-bdd9-45ce-a4d8-d83d710b1019
	I0116 03:43:14.724187  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:14.724309  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"424","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0116 03:43:14.724731  787862 pod_ready.go:92] pod "kube-proxy-cm64c" in "kube-system" namespace has status "Ready":"True"
	I0116 03:43:14.724750  787862 pod_ready.go:81] duration metric: took 68.607357ms waiting for pod "kube-proxy-cm64c" in "kube-system" namespace to be "Ready" ...
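
The "Waited for 62.192443ms due to client-side throttling" line in this phase is emitted by client-go's own rate limiter, not by the server's API Priority and Fairness (the message says as much). The limiter defaults to 5 requests per second with a burst of 10; here is a sketch of where those knobs live on a rest.Config, under the same illustrative kubeconfig assumption as above.

    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        // client-go defaults: 5 requests/second with a burst of 10. Once a
        // burst of calls outruns the limiter, later calls block and log the
        // "Waited for ... due to client-side throttling" message seen above.
        cfg.QPS = 5
        cfg.Burst = 10
        _ = kubernetes.NewForConfigOrDie(cfg)
    }
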
	I0116 03:43:14.724761  787862 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-741097" in "kube-system" namespace to be "Ready" ...
	I0116 03:43:14.921181  787862 request.go:629] Waited for 196.3369ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-741097
	I0116 03:43:14.921276  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-741097
	I0116 03:43:14.921291  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:14.921301  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:14.921330  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:14.924130  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:14.924206  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:14.924227  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:14.924251  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:14 GMT
	I0116 03:43:14.924268  787862 round_trippers.go:580]     Audit-Id: c8c86411-10eb-4735-a34c-72084b54961e
	I0116 03:43:14.924276  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:14.924282  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:14.924288  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:14.924442  787862 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-741097","namespace":"kube-system","uid":"1cd5ce84-044b-4867-be0b-45f71f0946b9","resourceVersion":"320","creationTimestamp":"2024-01-16T03:42:28Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8aac5e6028eb628a89e364f5d125fbc0","kubernetes.io/config.mirror":"8aac5e6028eb628a89e364f5d125fbc0","kubernetes.io/config.seen":"2024-01-16T03:42:28.150320674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0116 03:43:15.121237  787862 request.go:629] Waited for 196.328949ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:43:15.121300  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:43:15.121312  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:15.121322  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:15.121332  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:15.123849  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:15.123874  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:15.123883  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:15.123889  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:15.123895  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:15.123902  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:15.123908  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:15 GMT
	I0116 03:43:15.123918  787862 round_trippers.go:580]     Audit-Id: cad65173-17b9-4bc4-9caf-cf1d506c3f89
	I0116 03:43:15.124052  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"424","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0116 03:43:15.124498  787862 pod_ready.go:92] pod "kube-scheduler-multinode-741097" in "kube-system" namespace has status "Ready":"True"
	I0116 03:43:15.124520  787862 pod_ready.go:81] duration metric: took 399.751611ms waiting for pod "kube-scheduler-multinode-741097" in "kube-system" namespace to be "Ready" ...
	I0116 03:43:15.124532  787862 pod_ready.go:38] duration metric: took 2.001006097s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:43:15.124545  787862 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:43:15.124610  787862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:43:15.136353  787862 command_runner.go:130] > 1259
	I0116 03:43:15.137732  787862 api_server.go:72] duration metric: took 33.088506331s to wait for apiserver process to appear ...
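
The process probe above runs pgrep on the node: -f matches the pattern against the full command line, -x requires that match to cover the whole line, and -n keeps only the newest matching process, so the single "1259" echoed back is the apiserver's PID. Below is a hedged sketch of the same probe as a plain local exec; minikube actually runs it over its ssh_runner, which is not reproduced here.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // -f matches the pattern against the full command line, -x requires
        // the match to cover the whole line, -n keeps only the newest match.
        out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            fmt.Println("kube-apiserver process not found:", err)
            return
        }
        fmt.Println("kube-apiserver PID:", strings.TrimSpace(string(out))) // e.g. 1259
    }
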
	I0116 03:43:15.137790  787862 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:43:15.137827  787862 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0116 03:43:15.147501  787862 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
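
The healthz probe is a plain HTTPS GET: a 200 status with the literal body "ok" counts as healthy. A minimal sketch follows; skipping certificate verification is an illustration-only shortcut, and reaching /healthz anonymously assumes the cluster's default system:public-info-viewer binding is in place.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        // Illustration-only: skip TLS verification. A real client would trust
        // the cluster CA and present client certificates instead.
        httpClient := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        resp, err := httpClient.Get("https://192.168.58.2:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        // A healthy apiserver answers 200 with the body "ok", which is
        // exactly what the log records above.
        fmt.Println(resp.StatusCode, string(body))
    }
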
	I0116 03:43:15.147587  787862 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I0116 03:43:15.147606  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:15.147632  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:15.147641  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:15.148865  787862 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 03:43:15.148884  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:15.148892  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:15.148899  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:15.148922  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:15.148934  787862 round_trippers.go:580]     Content-Length: 264
	I0116 03:43:15.148941  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:15 GMT
	I0116 03:43:15.148947  787862 round_trippers.go:580]     Audit-Id: 042f723b-cd70-42ac-9c63-45e58a53def7
	I0116 03:43:15.148957  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:15.149135  787862 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/arm64"
	}
	I0116 03:43:15.149225  787862 api_server.go:141] control plane version: v1.28.4
	I0116 03:43:15.149245  787862 api_server.go:131] duration metric: took 11.442676ms to wait for apiserver health ...
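
The /version JSON above is what client-go's discovery client decodes into a version.Info; the control plane version logged next is its GitVersion field. A short sketch, under the same kubeconfig assumption as the earlier sketches:

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        // ServerVersion issues the same GET /version shown above and decodes
        // the JSON body into a version.Info.
        info, err := client.Discovery().ServerVersion()
        if err != nil {
            panic(err)
        }
        fmt.Println("control plane version:", info.GitVersion) // e.g. v1.28.4
    }
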
	I0116 03:43:15.149254  787862 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:43:15.320558  787862 request.go:629] Waited for 171.229813ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0116 03:43:15.320647  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0116 03:43:15.320658  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:15.320668  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:15.320679  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:15.323883  787862 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:43:15.323909  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:15.323919  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:15 GMT
	I0116 03:43:15.323925  787862 round_trippers.go:580]     Audit-Id: 8654c527-5c04-4a6a-906e-2fa6ab804cd1
	I0116 03:43:15.323931  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:15.323943  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:15.323952  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:15.323959  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:15.324657  787862 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"448"},"items":[{"metadata":{"name":"coredns-5dd5756b68-2z5xs","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ed28cba5-d03c-4872-8d43-ac2b9cbde1c3","resourceVersion":"444","creationTimestamp":"2024-01-16T03:42:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c85c0644-575e-40d8-9912-1bb96f25128f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c85c0644-575e-40d8-9912-1bb96f25128f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I0116 03:43:15.327090  787862 system_pods.go:59] 8 kube-system pods found
	I0116 03:43:15.327120  787862 system_pods.go:61] "coredns-5dd5756b68-2z5xs" [ed28cba5-d03c-4872-8d43-ac2b9cbde1c3] Running
	I0116 03:43:15.327127  787862 system_pods.go:61] "etcd-multinode-741097" [e88b8e2a-3aa3-4ddc-93aa-e8119b68034e] Running
	I0116 03:43:15.327132  787862 system_pods.go:61] "kindnet-g8srb" [f8484e68-06a7-4a2b-868a-d81bd13a3656] Running
	I0116 03:43:15.327141  787862 system_pods.go:61] "kube-apiserver-multinode-741097" [15831bc8-c5f4-4288-adf2-c5af42d05ebb] Running
	I0116 03:43:15.327149  787862 system_pods.go:61] "kube-controller-manager-multinode-741097" [848561d9-4f18-415a-afb3-a1697ab9738a] Running
	I0116 03:43:15.327155  787862 system_pods.go:61] "kube-proxy-cm64c" [07b12aa4-20cf-4db6-8c2b-80085bc219a5] Running
	I0116 03:43:15.327166  787862 system_pods.go:61] "kube-scheduler-multinode-741097" [1cd5ce84-044b-4867-be0b-45f71f0946b9] Running
	I0116 03:43:15.327171  787862 system_pods.go:61] "storage-provisioner" [a6a472e5-20d5-4ad7-8c69-cfdffeda3c59] Running
	I0116 03:43:15.327177  787862 system_pods.go:74] duration metric: took 177.917723ms to wait for pod list to return data ...
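
The "8 kube-system pods found" summary is derived from a single PodList GET; the per-pod lines print name, UID, and phase. An equivalent listing with client-go, same kubeconfig assumption as above:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            // Mirrors the `"<name>" [<uid>] <phase>` lines in the log.
            fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
        }
    }
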
	I0116 03:43:15.327187  787862 default_sa.go:34] waiting for default service account to be created ...
	I0116 03:43:15.520516  787862 request.go:629] Waited for 193.241977ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0116 03:43:15.520572  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0116 03:43:15.520577  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:15.520586  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:15.520597  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:15.523045  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:15.523064  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:15.523075  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:15 GMT
	I0116 03:43:15.523097  787862 round_trippers.go:580]     Audit-Id: 6573e8bc-d0bd-497b-8c53-372619ceedbb
	I0116 03:43:15.523111  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:15.523117  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:15.523123  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:15.523133  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:15.523139  787862 round_trippers.go:580]     Content-Length: 261
	I0116 03:43:15.523175  787862 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"48e4fbbd-3af5-4e91-879e-1d8a2666494a","resourceVersion":"335","creationTimestamp":"2024-01-16T03:42:41Z"}}]}
	I0116 03:43:15.523378  787862 default_sa.go:45] found service account: "default"
	I0116 03:43:15.523399  787862 default_sa.go:55] duration metric: took 196.206224ms for default service account to be created ...
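	(Editor's note: the repeated "Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go's default token-bucket rate limiter (QPS=5, Burst=10), not from the API server. A minimal sketch, not minikube's code, of issuing the same kube-system pod list with the client-side limiter loosened so these artificial waits disappear:)

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// client-go defaults are QPS=5, Burst=10; requests beyond that budget are
	// delayed on the client, producing the "Waited for ..." log lines.
	cfg.QPS = 50
	cfg.Burst = 100

	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
}
```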
	I0116 03:43:15.523407  787862 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 03:43:15.720768  787862 request.go:629] Waited for 197.290483ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0116 03:43:15.720846  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0116 03:43:15.720872  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:15.720886  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:15.720894  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:15.724307  787862 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:43:15.724338  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:15.724348  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:15.724354  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:15.724360  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:15.724367  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:15.724377  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:15 GMT
	I0116 03:43:15.724383  787862 round_trippers.go:580]     Audit-Id: 7a5f59d4-5dc4-4ec9-aa2e-2e717f9ba070
	I0116 03:43:15.724824  787862 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"coredns-5dd5756b68-2z5xs","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ed28cba5-d03c-4872-8d43-ac2b9cbde1c3","resourceVersion":"444","creationTimestamp":"2024-01-16T03:42:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c85c0644-575e-40d8-9912-1bb96f25128f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c85c0644-575e-40d8-9912-1bb96f25128f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I0116 03:43:15.727218  787862 system_pods.go:86] 8 kube-system pods found
	I0116 03:43:15.727244  787862 system_pods.go:89] "coredns-5dd5756b68-2z5xs" [ed28cba5-d03c-4872-8d43-ac2b9cbde1c3] Running
	I0116 03:43:15.727251  787862 system_pods.go:89] "etcd-multinode-741097" [e88b8e2a-3aa3-4ddc-93aa-e8119b68034e] Running
	I0116 03:43:15.727256  787862 system_pods.go:89] "kindnet-g8srb" [f8484e68-06a7-4a2b-868a-d81bd13a3656] Running
	I0116 03:43:15.727261  787862 system_pods.go:89] "kube-apiserver-multinode-741097" [15831bc8-c5f4-4288-adf2-c5af42d05ebb] Running
	I0116 03:43:15.727266  787862 system_pods.go:89] "kube-controller-manager-multinode-741097" [848561d9-4f18-415a-afb3-a1697ab9738a] Running
	I0116 03:43:15.727271  787862 system_pods.go:89] "kube-proxy-cm64c" [07b12aa4-20cf-4db6-8c2b-80085bc219a5] Running
	I0116 03:43:15.727278  787862 system_pods.go:89] "kube-scheduler-multinode-741097" [1cd5ce84-044b-4867-be0b-45f71f0946b9] Running
	I0116 03:43:15.727288  787862 system_pods.go:89] "storage-provisioner" [a6a472e5-20d5-4ad7-8c69-cfdffeda3c59] Running
	I0116 03:43:15.727296  787862 system_pods.go:126] duration metric: took 203.87636ms to wait for k8s-apps to be running ...
	I0116 03:43:15.727309  787862 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 03:43:15.727364  787862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:43:15.740536  787862 system_svc.go:56] duration metric: took 13.217007ms WaitForService to wait for kubelet.
	I0116 03:43:15.740597  787862 kubeadm.go:581] duration metric: took 33.69137556s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 03:43:15.740623  787862 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:43:15.921120  787862 request.go:629] Waited for 180.429544ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0116 03:43:15.921192  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0116 03:43:15.921203  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:15.921212  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:15.921220  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:15.923669  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:15.923731  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:15.923749  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:15 GMT
	I0116 03:43:15.923764  787862 round_trippers.go:580]     Audit-Id: 0621e3c8-7134-4c08-ad17-9c12a50daf44
	I0116 03:43:15.923771  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:15.923777  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:15.923783  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:15.923805  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:15.923934  787862 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"424","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6082 chars]
	I0116 03:43:15.924415  787862 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0116 03:43:15.924447  787862 node_conditions.go:123] node cpu capacity is 2
	I0116 03:43:15.924458  787862 node_conditions.go:105] duration metric: took 183.830652ms to run NodePressure ...
	I0116 03:43:15.924474  787862 start.go:228] waiting for startup goroutines ...
	I0116 03:43:15.924482  787862 start.go:233] waiting for cluster config update ...
	I0116 03:43:15.924493  787862 start.go:242] writing updated cluster config ...
	I0116 03:43:15.927001  787862 out.go:177] 
	I0116 03:43:15.928853  787862 config.go:182] Loaded profile config "multinode-741097": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:43:15.928947  787862 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/multinode-741097/config.json ...
	I0116 03:43:15.931300  787862 out.go:177] * Starting worker node multinode-741097-m02 in cluster multinode-741097
	I0116 03:43:15.933513  787862 cache.go:121] Beginning downloading kic base image for docker with crio
	I0116 03:43:15.935443  787862 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0116 03:43:15.937391  787862 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 03:43:15.937417  787862 cache.go:56] Caching tarball of preloaded images
	I0116 03:43:15.937460  787862 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0116 03:43:15.937530  787862 preload.go:174] Found /home/jenkins/minikube-integration/17967-719286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0116 03:43:15.937546  787862 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0116 03:43:15.937636  787862 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/multinode-741097/config.json ...
	I0116 03:43:15.955227  787862 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0116 03:43:15.955252  787862 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0116 03:43:15.955276  787862 cache.go:194] Successfully downloaded all kic artifacts
	I0116 03:43:15.955318  787862 start.go:365] acquiring machines lock for multinode-741097-m02: {Name:mk65d2b045e0087b5213caadc486157fecbfc381 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 03:43:15.955439  787862 start.go:369] acquired machines lock for "multinode-741097-m02" in 103.599µs
	I0116 03:43:15.955466  787862 start.go:93] Provisioning new machine with config: &{Name:multinode-741097 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-741097 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0116 03:43:15.955543  787862 start.go:125] createHost starting for "m02" (driver="docker")
	I0116 03:43:15.959153  787862 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0116 03:43:15.959262  787862 start.go:159] libmachine.API.Create for "multinode-741097" (driver="docker")
	I0116 03:43:15.959284  787862 client.go:168] LocalClient.Create starting
	I0116 03:43:15.959340  787862 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17967-719286/.minikube/certs/ca.pem
	I0116 03:43:15.959376  787862 main.go:141] libmachine: Decoding PEM data...
	I0116 03:43:15.959395  787862 main.go:141] libmachine: Parsing certificate...
	I0116 03:43:15.959453  787862 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17967-719286/.minikube/certs/cert.pem
	I0116 03:43:15.959480  787862 main.go:141] libmachine: Decoding PEM data...
	I0116 03:43:15.959495  787862 main.go:141] libmachine: Parsing certificate...
	I0116 03:43:15.959734  787862 cli_runner.go:164] Run: docker network inspect multinode-741097 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0116 03:43:15.976869  787862 network_create.go:77] Found existing network {name:multinode-741097 subnet:0x4002bc2150 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I0116 03:43:15.976908  787862 kic.go:121] calculated static IP "192.168.58.3" for the "multinode-741097-m02" container
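	(Editor's note: the "calculated static IP" step hands out node addresses deterministically inside the cluster network: gateway .1, control plane .2, each added worker the next octet. A hypothetical sketch of that offset arithmetic, assuming a /24 subnet and a 1-based node index; the helper name is illustrative, not minikube's:)

```go
package main

import (
	"fmt"
	"net"
)

// nodeIP is a hypothetical helper: offset the subnet base by nodeIndex+1,
// so node 1 gets .2 and node 2 gets .3 (matching 192.168.58.3 in the log).
func nodeIP(subnetBase string, nodeIndex int) net.IP {
	ip := net.ParseIP(subnetBase).To4()
	ip[3] += byte(nodeIndex + 1)
	return ip
}

func main() {
	fmt.Println(nodeIP("192.168.58.0", 2)) // prints 192.168.58.3
}
```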
	I0116 03:43:15.976983  787862 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0116 03:43:15.994983  787862 cli_runner.go:164] Run: docker volume create multinode-741097-m02 --label name.minikube.sigs.k8s.io=multinode-741097-m02 --label created_by.minikube.sigs.k8s.io=true
	I0116 03:43:16.013516  787862 oci.go:103] Successfully created a docker volume multinode-741097-m02
	I0116 03:43:16.013606  787862 cli_runner.go:164] Run: docker run --rm --name multinode-741097-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-741097-m02 --entrypoint /usr/bin/test -v multinode-741097-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0116 03:43:16.561499  787862 oci.go:107] Successfully prepared a docker volume multinode-741097-m02
	I0116 03:43:16.561538  787862 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 03:43:16.561560  787862 kic.go:194] Starting extracting preloaded images to volume ...
	I0116 03:43:16.561652  787862 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17967-719286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-741097-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0116 03:43:20.864775  787862 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17967-719286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-741097-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.30307911s)
	I0116 03:43:20.864814  787862 kic.go:203] duration metric: took 4.303251 seconds to extract preloaded images to volume
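	(Editor's note: the two cli_runner lines above show the preload pattern: mount the lz4-compressed image tarball read-only into a throwaway container and untar it into the named volume that later backs /var on the node container. A minimal sketch of issuing that same docker invocation from Go; the paths and image tag below are placeholders, not minikube's code:)

```go
package main

import (
	"log"
	"os/exec"
)

func main() {
	// Hypothetical placeholders for the tarball, volume, and kicbase image.
	tarball := "/path/to/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4"
	volume := "multinode-741097-m02"
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866"

	// Mirrors the logged command: docker run --rm --entrypoint /usr/bin/tar
	//   -v tarball:/preloaded.tar:ro -v volume:/extractDir image
	//   -I lz4 -xf /preloaded.tar -C /extractDir
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
}
```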
	W0116 03:43:20.864964  787862 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0116 03:43:20.865085  787862 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0116 03:43:20.938266  787862 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-741097-m02 --name multinode-741097-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-741097-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-741097-m02 --network multinode-741097 --ip 192.168.58.3 --volume multinode-741097-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0
	I0116 03:43:21.303522  787862 cli_runner.go:164] Run: docker container inspect multinode-741097-m02 --format={{.State.Running}}
	I0116 03:43:21.331464  787862 cli_runner.go:164] Run: docker container inspect multinode-741097-m02 --format={{.State.Status}}
	I0116 03:43:21.358035  787862 cli_runner.go:164] Run: docker exec multinode-741097-m02 stat /var/lib/dpkg/alternatives/iptables
	I0116 03:43:21.425692  787862 oci.go:144] the created container "multinode-741097-m02" has a running status.
	I0116 03:43:21.425727  787862 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17967-719286/.minikube/machines/multinode-741097-m02/id_rsa...
	I0116 03:43:22.028365  787862 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-719286/.minikube/machines/multinode-741097-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0116 03:43:22.028468  787862 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17967-719286/.minikube/machines/multinode-741097-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0116 03:43:22.066371  787862 cli_runner.go:164] Run: docker container inspect multinode-741097-m02 --format={{.State.Status}}
	I0116 03:43:22.100608  787862 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0116 03:43:22.100627  787862 kic_runner.go:114] Args: [docker exec --privileged multinode-741097-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0116 03:43:22.191292  787862 cli_runner.go:164] Run: docker container inspect multinode-741097-m02 --format={{.State.Status}}
	I0116 03:43:22.218612  787862 machine.go:88] provisioning docker machine ...
	I0116 03:43:22.218639  787862 ubuntu.go:169] provisioning hostname "multinode-741097-m02"
	I0116 03:43:22.218701  787862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-741097-m02
	I0116 03:43:22.246577  787862 main.go:141] libmachine: Using SSH client type: native
	I0116 03:43:22.246996  787862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 33562 <nil> <nil>}
	I0116 03:43:22.247008  787862 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-741097-m02 && echo "multinode-741097-m02" | sudo tee /etc/hostname
	I0116 03:43:22.442800  787862 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-741097-m02
	
	I0116 03:43:22.442904  787862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-741097-m02
	I0116 03:43:22.462954  787862 main.go:141] libmachine: Using SSH client type: native
	I0116 03:43:22.463468  787862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 33562 <nil> <nil>}
	I0116 03:43:22.463497  787862 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-741097-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-741097-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-741097-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 03:43:22.605538  787862 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 03:43:22.605621  787862 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17967-719286/.minikube CaCertPath:/home/jenkins/minikube-integration/17967-719286/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17967-719286/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17967-719286/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17967-719286/.minikube}
	I0116 03:43:22.605665  787862 ubuntu.go:177] setting up certificates
	I0116 03:43:22.605706  787862 provision.go:83] configureAuth start
	I0116 03:43:22.605804  787862 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-741097-m02
	I0116 03:43:22.629715  787862 provision.go:138] copyHostCerts
	I0116 03:43:22.629754  787862 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-719286/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17967-719286/.minikube/cert.pem
	I0116 03:43:22.629785  787862 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-719286/.minikube/cert.pem, removing ...
	I0116 03:43:22.629793  787862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-719286/.minikube/cert.pem
	I0116 03:43:22.629874  787862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-719286/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17967-719286/.minikube/cert.pem (1123 bytes)
	I0116 03:43:22.629947  787862 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-719286/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17967-719286/.minikube/key.pem
	I0116 03:43:22.629963  787862 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-719286/.minikube/key.pem, removing ...
	I0116 03:43:22.629967  787862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-719286/.minikube/key.pem
	I0116 03:43:22.629991  787862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-719286/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17967-719286/.minikube/key.pem (1675 bytes)
	I0116 03:43:22.630028  787862 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-719286/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17967-719286/.minikube/ca.pem
	I0116 03:43:22.630053  787862 exec_runner.go:144] found /home/jenkins/minikube-integration/17967-719286/.minikube/ca.pem, removing ...
	I0116 03:43:22.630057  787862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17967-719286/.minikube/ca.pem
	I0116 03:43:22.630080  787862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17967-719286/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17967-719286/.minikube/ca.pem (1082 bytes)
	I0116 03:43:22.630124  787862 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17967-719286/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17967-719286/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17967-719286/.minikube/certs/ca-key.pem org=jenkins.multinode-741097-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-741097-m02]
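	(Editor's note: the configureAuth step above issues a server certificate whose SAN list is exactly what the log prints: the node IP, loopback, and the machine hostnames. A minimal sketch of building such a certificate with Go's crypto/x509; self-signed here for brevity, whereas minikube signs with its ca.pem/ca-key.pem:)

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-741097-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SAN entries matching the log line: node IP, loopback, hostnames.
		IPAddresses: []net.IP{net.ParseIP("192.168.58.3"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "multinode-741097-m02"},
	}
	// Self-signed (template doubles as parent) to keep the sketch runnable.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```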
	I0116 03:43:23.138366  787862 provision.go:172] copyRemoteCerts
	I0116 03:43:23.138432  787862 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 03:43:23.138479  787862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-741097-m02
	I0116 03:43:23.161082  787862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33562 SSHKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/machines/multinode-741097-m02/id_rsa Username:docker}
	I0116 03:43:23.258688  787862 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-719286/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0116 03:43:23.258749  787862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-719286/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 03:43:23.286513  787862 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-719286/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0116 03:43:23.286571  787862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-719286/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0116 03:43:23.313453  787862 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-719286/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0116 03:43:23.313524  787862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-719286/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0116 03:43:23.341739  787862 provision.go:86] duration metric: configureAuth took 736.003583ms
	I0116 03:43:23.341781  787862 ubuntu.go:193] setting minikube options for container-runtime
	I0116 03:43:23.341971  787862 config.go:182] Loaded profile config "multinode-741097": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:43:23.342076  787862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-741097-m02
	I0116 03:43:23.360151  787862 main.go:141] libmachine: Using SSH client type: native
	I0116 03:43:23.360578  787862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 33562 <nil> <nil>}
	I0116 03:43:23.360599  787862 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 03:43:23.615314  787862 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 03:43:23.615373  787862 machine.go:91] provisioned docker machine in 1.396743416s
	I0116 03:43:23.615416  787862 client.go:171] LocalClient.Create took 7.656125729s
	I0116 03:43:23.615468  787862 start.go:167] duration metric: libmachine.API.Create for "multinode-741097" took 7.656205696s
	I0116 03:43:23.615494  787862 start.go:300] post-start starting for "multinode-741097-m02" (driver="docker")
	I0116 03:43:23.615531  787862 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 03:43:23.615639  787862 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 03:43:23.615709  787862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-741097-m02
	I0116 03:43:23.635340  787862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33562 SSHKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/machines/multinode-741097-m02/id_rsa Username:docker}
	I0116 03:43:23.735499  787862 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 03:43:23.739497  787862 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I0116 03:43:23.739516  787862 command_runner.go:130] > NAME="Ubuntu"
	I0116 03:43:23.739524  787862 command_runner.go:130] > VERSION_ID="22.04"
	I0116 03:43:23.739539  787862 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I0116 03:43:23.739546  787862 command_runner.go:130] > VERSION_CODENAME=jammy
	I0116 03:43:23.739553  787862 command_runner.go:130] > ID=ubuntu
	I0116 03:43:23.739558  787862 command_runner.go:130] > ID_LIKE=debian
	I0116 03:43:23.739574  787862 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0116 03:43:23.739580  787862 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0116 03:43:23.739591  787862 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0116 03:43:23.739600  787862 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0116 03:43:23.739605  787862 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0116 03:43:23.739656  787862 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0116 03:43:23.739686  787862 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0116 03:43:23.739701  787862 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0116 03:43:23.739708  787862 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0116 03:43:23.739718  787862 filesync.go:126] Scanning /home/jenkins/minikube-integration/17967-719286/.minikube/addons for local assets ...
	I0116 03:43:23.739773  787862 filesync.go:126] Scanning /home/jenkins/minikube-integration/17967-719286/.minikube/files for local assets ...
	I0116 03:43:23.739854  787862 filesync.go:149] local asset: /home/jenkins/minikube-integration/17967-719286/.minikube/files/etc/ssl/certs/7246212.pem -> 7246212.pem in /etc/ssl/certs
	I0116 03:43:23.739870  787862 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-719286/.minikube/files/etc/ssl/certs/7246212.pem -> /etc/ssl/certs/7246212.pem
	I0116 03:43:23.739963  787862 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 03:43:23.750154  787862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-719286/.minikube/files/etc/ssl/certs/7246212.pem --> /etc/ssl/certs/7246212.pem (1708 bytes)
	I0116 03:43:23.778801  787862 start.go:303] post-start completed in 163.28035ms
	I0116 03:43:23.779146  787862 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-741097-m02
	I0116 03:43:23.801180  787862 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/multinode-741097/config.json ...
	I0116 03:43:23.801461  787862 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0116 03:43:23.801516  787862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-741097-m02
	I0116 03:43:23.819732  787862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33562 SSHKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/machines/multinode-741097-m02/id_rsa Username:docker}
	I0116 03:43:23.914098  787862 command_runner.go:130] > 15%
	I0116 03:43:23.914210  787862 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0116 03:43:23.919152  787862 command_runner.go:130] > 166G
	I0116 03:43:23.919574  787862 start.go:128] duration metric: createHost completed in 7.964017245s
	I0116 03:43:23.919592  787862 start.go:83] releasing machines lock for "multinode-741097-m02", held for 7.964143844s
	I0116 03:43:23.919658  787862 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-741097-m02
	I0116 03:43:23.940220  787862 out.go:177] * Found network options:
	I0116 03:43:23.942094  787862 out.go:177]   - NO_PROXY=192.168.58.2
	W0116 03:43:23.943741  787862 proxy.go:119] fail to check proxy env: Error ip not in block
	W0116 03:43:23.943783  787862 proxy.go:119] fail to check proxy env: Error ip not in block
	I0116 03:43:23.943847  787862 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 03:43:23.943893  787862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-741097-m02
	I0116 03:43:23.944143  787862 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 03:43:23.944227  787862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-741097-m02
	I0116 03:43:23.965223  787862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33562 SSHKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/machines/multinode-741097-m02/id_rsa Username:docker}
	I0116 03:43:23.976029  787862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33562 SSHKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/machines/multinode-741097-m02/id_rsa Username:docker}
	I0116 03:43:24.215229  787862 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0116 03:43:24.230237  787862 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0116 03:43:24.233598  787862 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0116 03:43:24.233619  787862 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0116 03:43:24.233629  787862 command_runner.go:130] > Device: b3h/179d	Inode: 1304622     Links: 1
	I0116 03:43:24.233637  787862 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0116 03:43:24.233661  787862 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0116 03:43:24.233673  787862 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0116 03:43:24.233680  787862 command_runner.go:130] > Change: 2024-01-16 03:20:35.860569315 +0000
	I0116 03:43:24.233687  787862 command_runner.go:130] >  Birth: 2024-01-16 03:20:35.860569315 +0000
	I0116 03:43:24.233761  787862 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 03:43:24.257636  787862 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0116 03:43:24.257716  787862 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 03:43:24.295576  787862 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0116 03:43:24.295628  787862 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0116 03:43:24.295637  787862 start.go:475] detecting cgroup driver to use...
	I0116 03:43:24.295670  787862 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0116 03:43:24.295732  787862 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 03:43:24.315071  787862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 03:43:24.328936  787862 docker.go:217] disabling cri-docker service (if available) ...
	I0116 03:43:24.329001  787862 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 03:43:24.344355  787862 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 03:43:24.362021  787862 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 03:43:24.462918  787862 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 03:43:24.564184  787862 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0116 03:43:24.564221  787862 docker.go:233] disabling docker service ...
	I0116 03:43:24.564285  787862 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 03:43:24.586089  787862 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 03:43:24.603545  787862 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 03:43:24.701016  787862 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0116 03:43:24.701108  787862 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 03:43:24.810060  787862 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0116 03:43:24.810175  787862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 03:43:24.824021  787862 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 03:43:24.842095  787862 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0116 03:43:24.843326  787862 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 03:43:24.843391  787862 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:43:24.854700  787862 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 03:43:24.854778  787862 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:43:24.867512  787862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 03:43:24.879814  787862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
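	(Editor's note: the sed runs above rewrite /etc/crio/crio.conf.d/02-crio.conf so pause_image matches the kubeadm pause image and the cgroup manager matches the host's cgroupfs driver, then append conmon_cgroup = "pod". A hypothetical Go equivalent of those whole-line substitutions, not minikube's code:)

```go
package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	// sed '/conmon_cgroup = .*/d': drop any existing conmon_cgroup line.
	data = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n?`).ReplaceAll(data, nil)
	// sed 's|^.*pause_image = .*$|...|': pin the pause image.
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	// sed 's|^.*cgroup_manager = .*$|...|' plus the '/a conmon_cgroup = "pod"' append.
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))
	if err := os.WriteFile(path, data, 0o644); err != nil {
		log.Fatal(err)
	}
}
```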
	I0116 03:43:24.890970  787862 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 03:43:24.902253  787862 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 03:43:24.911718  787862 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0116 03:43:24.912863  787862 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 03:43:24.922832  787862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:43:25.021229  787862 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 03:43:25.145080  787862 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 03:43:25.145218  787862 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 03:43:25.150594  787862 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0116 03:43:25.150616  787862 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0116 03:43:25.150629  787862 command_runner.go:130] > Device: bch/188d	Inode: 186         Links: 1
	I0116 03:43:25.150638  787862 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0116 03:43:25.150644  787862 command_runner.go:130] > Access: 2024-01-16 03:43:25.131310334 +0000
	I0116 03:43:25.150651  787862 command_runner.go:130] > Modify: 2024-01-16 03:43:25.131310334 +0000
	I0116 03:43:25.150658  787862 command_runner.go:130] > Change: 2024-01-16 03:43:25.131310334 +0000
	I0116 03:43:25.150662  787862 command_runner.go:130] >  Birth: -
	I0116 03:43:25.150767  787862 start.go:543] Will wait 60s for crictl version
	I0116 03:43:25.150855  787862 ssh_runner.go:195] Run: which crictl
	I0116 03:43:25.155504  787862 command_runner.go:130] > /usr/bin/crictl
	I0116 03:43:25.155818  787862 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 03:43:25.196011  787862 command_runner.go:130] > Version:  0.1.0
	I0116 03:43:25.196097  787862 command_runner.go:130] > RuntimeName:  cri-o
	I0116 03:43:25.196119  787862 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0116 03:43:25.196141  787862 command_runner.go:130] > RuntimeApiVersion:  v1
	I0116 03:43:25.198488  787862 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
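	(Editor's note: "Will wait 60s for socket path" and the stat output above are a readiness poll: the crio restart only counts as successful once /var/run/crio/crio.sock exists as a socket and crictl reports a runtime version. A small sketch of such a polling loop, assuming the same socket path; not minikube's implementation:)

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists and is a unix socket, or times out.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	fmt.Println("crio socket is ready")
}
```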
	I0116 03:43:25.198632  787862 ssh_runner.go:195] Run: crio --version
	I0116 03:43:25.243140  787862 command_runner.go:130] > crio version 1.24.6
	I0116 03:43:25.243215  787862 command_runner.go:130] > Version:          1.24.6
	I0116 03:43:25.243237  787862 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0116 03:43:25.243255  787862 command_runner.go:130] > GitTreeState:     clean
	I0116 03:43:25.243289  787862 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0116 03:43:25.243312  787862 command_runner.go:130] > GoVersion:        go1.18.2
	I0116 03:43:25.243328  787862 command_runner.go:130] > Compiler:         gc
	I0116 03:43:25.243345  787862 command_runner.go:130] > Platform:         linux/arm64
	I0116 03:43:25.243374  787862 command_runner.go:130] > Linkmode:         dynamic
	I0116 03:43:25.243402  787862 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0116 03:43:25.243420  787862 command_runner.go:130] > SeccompEnabled:   true
	I0116 03:43:25.243449  787862 command_runner.go:130] > AppArmorEnabled:  false
	I0116 03:43:25.245682  787862 ssh_runner.go:195] Run: crio --version
	I0116 03:43:25.291225  787862 command_runner.go:130] > crio version 1.24.6
	I0116 03:43:25.291282  787862 command_runner.go:130] > Version:          1.24.6
	I0116 03:43:25.291312  787862 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0116 03:43:25.291331  787862 command_runner.go:130] > GitTreeState:     clean
	I0116 03:43:25.291351  787862 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0116 03:43:25.291385  787862 command_runner.go:130] > GoVersion:        go1.18.2
	I0116 03:43:25.291404  787862 command_runner.go:130] > Compiler:         gc
	I0116 03:43:25.291424  787862 command_runner.go:130] > Platform:         linux/arm64
	I0116 03:43:25.291441  787862 command_runner.go:130] > Linkmode:         dynamic
	I0116 03:43:25.291470  787862 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0116 03:43:25.291488  787862 command_runner.go:130] > SeccompEnabled:   true
	I0116 03:43:25.291506  787862 command_runner.go:130] > AppArmorEnabled:  false
	I0116 03:43:25.295743  787862 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0116 03:43:25.297784  787862 out.go:177]   - env NO_PROXY=192.168.58.2
	I0116 03:43:25.299815  787862 cli_runner.go:164] Run: docker network inspect multinode-741097 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0116 03:43:25.317679  787862 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0116 03:43:25.322351  787862 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:43:25.335438  787862 certs.go:56] Setting up /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/multinode-741097 for IP: 192.168.58.3
	I0116 03:43:25.335469  787862 certs.go:190] acquiring lock for shared ca certs: {Name:mkc1cd6c1048e37282c341d17731487c267a60dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:43:25.335599  787862 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17967-719286/.minikube/ca.key
	I0116 03:43:25.335638  787862 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17967-719286/.minikube/proxy-client-ca.key
	I0116 03:43:25.335648  787862 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-719286/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0116 03:43:25.335661  787862 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-719286/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0116 03:43:25.335672  787862 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-719286/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0116 03:43:25.335682  787862 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-719286/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0116 03:43:25.335731  787862 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-719286/.minikube/certs/home/jenkins/minikube-integration/17967-719286/.minikube/certs/724621.pem (1338 bytes)
	W0116 03:43:25.335759  787862 certs.go:433] ignoring /home/jenkins/minikube-integration/17967-719286/.minikube/certs/home/jenkins/minikube-integration/17967-719286/.minikube/certs/724621_empty.pem, impossibly tiny 0 bytes
	I0116 03:43:25.335768  787862 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-719286/.minikube/certs/home/jenkins/minikube-integration/17967-719286/.minikube/certs/ca-key.pem (1679 bytes)
	I0116 03:43:25.335796  787862 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-719286/.minikube/certs/home/jenkins/minikube-integration/17967-719286/.minikube/certs/ca.pem (1082 bytes)
	I0116 03:43:25.335821  787862 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-719286/.minikube/certs/home/jenkins/minikube-integration/17967-719286/.minikube/certs/cert.pem (1123 bytes)
	I0116 03:43:25.335844  787862 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-719286/.minikube/certs/home/jenkins/minikube-integration/17967-719286/.minikube/certs/key.pem (1675 bytes)
	I0116 03:43:25.335887  787862 certs.go:437] found cert: /home/jenkins/minikube-integration/17967-719286/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17967-719286/.minikube/files/etc/ssl/certs/7246212.pem (1708 bytes)
	I0116 03:43:25.335916  787862 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-719286/.minikube/certs/724621.pem -> /usr/share/ca-certificates/724621.pem
	I0116 03:43:25.335928  787862 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-719286/.minikube/files/etc/ssl/certs/7246212.pem -> /usr/share/ca-certificates/7246212.pem
	I0116 03:43:25.335940  787862 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17967-719286/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:43:25.336398  787862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-719286/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 03:43:25.365391  787862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-719286/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0116 03:43:25.393810  787862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-719286/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 03:43:25.426516  787862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-719286/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0116 03:43:25.455375  787862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-719286/.minikube/certs/724621.pem --> /usr/share/ca-certificates/724621.pem (1338 bytes)
	I0116 03:43:25.483247  787862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-719286/.minikube/files/etc/ssl/certs/7246212.pem --> /usr/share/ca-certificates/7246212.pem (1708 bytes)
	I0116 03:43:25.510942  787862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17967-719286/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 03:43:25.537860  787862 ssh_runner.go:195] Run: openssl version
	I0116 03:43:25.544392  787862 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0116 03:43:25.544777  787862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7246212.pem && ln -fs /usr/share/ca-certificates/7246212.pem /etc/ssl/certs/7246212.pem"
	I0116 03:43:25.555653  787862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7246212.pem
	I0116 03:43:25.559738  787862 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 16 03:27 /usr/share/ca-certificates/7246212.pem
	I0116 03:43:25.560027  787862 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 03:27 /usr/share/ca-certificates/7246212.pem
	I0116 03:43:25.560169  787862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7246212.pem
	I0116 03:43:25.568558  787862 command_runner.go:130] > 3ec20f2e
	I0116 03:43:25.568637  787862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7246212.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 03:43:25.579514  787862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 03:43:25.590703  787862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:43:25.596671  787862 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 16 03:21 /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:43:25.597038  787862 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 03:21 /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:43:25.597097  787862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:43:25.605855  787862 command_runner.go:130] > b5213941
	I0116 03:43:25.606267  787862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 03:43:25.617576  787862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/724621.pem && ln -fs /usr/share/ca-certificates/724621.pem /etc/ssl/certs/724621.pem"
	I0116 03:43:25.628728  787862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/724621.pem
	I0116 03:43:25.633115  787862 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 16 03:27 /usr/share/ca-certificates/724621.pem
	I0116 03:43:25.633387  787862 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 03:27 /usr/share/ca-certificates/724621.pem
	I0116 03:43:25.633467  787862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/724621.pem
	I0116 03:43:25.641889  787862 command_runner.go:130] > 51391683
	I0116 03:43:25.642018  787862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/724621.pem /etc/ssl/certs/51391683.0"
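	(Editor's note: the `openssl x509 -hash -noout` calls above print the OpenSSL subject-name hash (e.g. b5213941), and the following `ln -fs` links each cert to /etc/ssl/certs/<hash>.0, which is the layout CApath-based verification expects. A sketch of that install step shelling out to openssl the same way; the helper name is illustrative:)

```go
package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA links pemPath into /etc/ssl/certs under its OpenSSL subject hash,
// mirroring the `openssl x509 -hash -noout` + `ln -fs` pair in the log.
func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	os.Remove(link) // emulate ln -f: replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
}
```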
	I0116 03:43:25.653325  787862 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 03:43:25.657735  787862 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 03:43:25.657791  787862 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 03:43:25.657900  787862 ssh_runner.go:195] Run: crio config
	I0116 03:43:25.706561  787862 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0116 03:43:25.706587  787862 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0116 03:43:25.706596  787862 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0116 03:43:25.706601  787862 command_runner.go:130] > #
	I0116 03:43:25.706609  787862 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0116 03:43:25.706617  787862 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0116 03:43:25.706625  787862 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0116 03:43:25.706638  787862 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0116 03:43:25.706650  787862 command_runner.go:130] > # reload'.
	I0116 03:43:25.706658  787862 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0116 03:43:25.706666  787862 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0116 03:43:25.706677  787862 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0116 03:43:25.706685  787862 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0116 03:43:25.706692  787862 command_runner.go:130] > [crio]
	I0116 03:43:25.706699  787862 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0116 03:43:25.706710  787862 command_runner.go:130] > # containers images, in this directory.
	I0116 03:43:25.706719  787862 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0116 03:43:25.706730  787862 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0116 03:43:25.706738  787862 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0116 03:43:25.706746  787862 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0116 03:43:25.706757  787862 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0116 03:43:25.706762  787862 command_runner.go:130] > # storage_driver = "vfs"
	I0116 03:43:25.706772  787862 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0116 03:43:25.706782  787862 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0116 03:43:25.706787  787862 command_runner.go:130] > # storage_option = [
	I0116 03:43:25.706794  787862 command_runner.go:130] > # ]
	I0116 03:43:25.706802  787862 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0116 03:43:25.706810  787862 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0116 03:43:25.706818  787862 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0116 03:43:25.706828  787862 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0116 03:43:25.706836  787862 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0116 03:43:25.706844  787862 command_runner.go:130] > # always happen on a node reboot
	I0116 03:43:25.706850  787862 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0116 03:43:25.706857  787862 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0116 03:43:25.706866  787862 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0116 03:43:25.706876  787862 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0116 03:43:25.706885  787862 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0116 03:43:25.706895  787862 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0116 03:43:25.706905  787862 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0116 03:43:25.706913  787862 command_runner.go:130] > # internal_wipe = true
	I0116 03:43:25.706920  787862 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0116 03:43:25.706927  787862 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0116 03:43:25.706936  787862 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0116 03:43:25.707134  787862 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0116 03:43:25.707148  787862 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0116 03:43:25.707154  787862 command_runner.go:130] > [crio.api]
	I0116 03:43:25.707160  787862 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0116 03:43:25.707170  787862 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0116 03:43:25.707179  787862 command_runner.go:130] > # IP address on which the stream server will listen.
	I0116 03:43:25.707185  787862 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0116 03:43:25.707198  787862 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0116 03:43:25.707206  787862 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0116 03:43:25.707214  787862 command_runner.go:130] > # stream_port = "0"
	I0116 03:43:25.707222  787862 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0116 03:43:25.707229  787862 command_runner.go:130] > # stream_enable_tls = false
	I0116 03:43:25.707237  787862 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0116 03:43:25.707242  787862 command_runner.go:130] > # stream_idle_timeout = ""
	I0116 03:43:25.707253  787862 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0116 03:43:25.707262  787862 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0116 03:43:25.707272  787862 command_runner.go:130] > # minutes.
	I0116 03:43:25.707482  787862 command_runner.go:130] > # stream_tls_cert = ""
	I0116 03:43:25.707501  787862 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0116 03:43:25.707510  787862 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0116 03:43:25.707518  787862 command_runner.go:130] > # stream_tls_key = ""
	I0116 03:43:25.707528  787862 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0116 03:43:25.707538  787862 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0116 03:43:25.707545  787862 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0116 03:43:25.707553  787862 command_runner.go:130] > # stream_tls_ca = ""
	I0116 03:43:25.707562  787862 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0116 03:43:25.707571  787862 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0116 03:43:25.707580  787862 command_runner.go:130] > # Maximum grpc receive message size in bytes. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0116 03:43:25.707585  787862 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0116 03:43:25.707600  787862 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0116 03:43:25.707609  787862 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0116 03:43:25.707614  787862 command_runner.go:130] > [crio.runtime]
	I0116 03:43:25.707624  787862 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0116 03:43:25.707631  787862 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0116 03:43:25.707638  787862 command_runner.go:130] > # "nofile=1024:2048"
	I0116 03:43:25.707649  787862 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0116 03:43:25.707657  787862 command_runner.go:130] > # default_ulimits = [
	I0116 03:43:25.707849  787862 command_runner.go:130] > # ]
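As a sketch only: with the comments removed, a populated list reusing the documented "nofile" example would read as follows (the limit values are illustrative, not taken from this run):

	default_ulimits = [
		"nofile=1024:2048",
	]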
	I0116 03:43:25.707864  787862 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0116 03:43:25.707870  787862 command_runner.go:130] > # no_pivot = false
	I0116 03:43:25.707877  787862 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0116 03:43:25.707889  787862 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0116 03:43:25.707897  787862 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0116 03:43:25.707905  787862 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0116 03:43:25.707914  787862 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0116 03:43:25.707923  787862 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0116 03:43:25.707928  787862 command_runner.go:130] > # conmon = ""
	I0116 03:43:25.707939  787862 command_runner.go:130] > # Cgroup setting for conmon
	I0116 03:43:25.707948  787862 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0116 03:43:25.707957  787862 command_runner.go:130] > conmon_cgroup = "pod"
	I0116 03:43:25.707965  787862 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0116 03:43:25.707972  787862 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0116 03:43:25.707983  787862 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0116 03:43:25.707990  787862 command_runner.go:130] > # conmon_env = [
	I0116 03:43:25.708159  787862 command_runner.go:130] > # ]
	I0116 03:43:25.708175  787862 command_runner.go:130] > # Additional environment variables to set for all the
	I0116 03:43:25.708183  787862 command_runner.go:130] > # containers. These are overridden if set in the
	I0116 03:43:25.708190  787862 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0116 03:43:25.708199  787862 command_runner.go:130] > # default_env = [
	I0116 03:43:25.708283  787862 command_runner.go:130] > # ]
	I0116 03:43:25.708294  787862 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0116 03:43:25.708300  787862 command_runner.go:130] > # selinux = false
	I0116 03:43:25.708310  787862 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0116 03:43:25.708323  787862 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0116 03:43:25.708345  787862 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0116 03:43:25.708351  787862 command_runner.go:130] > # seccomp_profile = ""
	I0116 03:43:25.708360  787862 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0116 03:43:25.708368  787862 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0116 03:43:25.708376  787862 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0116 03:43:25.708387  787862 command_runner.go:130] > # which might increase security.
	I0116 03:43:25.708593  787862 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0116 03:43:25.708611  787862 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0116 03:43:25.708620  787862 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0116 03:43:25.708628  787862 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0116 03:43:25.708638  787862 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0116 03:43:25.708647  787862 command_runner.go:130] > # This option supports live configuration reload.
	I0116 03:43:25.708654  787862 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0116 03:43:25.708663  787862 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0116 03:43:25.708671  787862 command_runner.go:130] > # the cgroup blockio controller.
	I0116 03:43:25.708679  787862 command_runner.go:130] > # blockio_config_file = ""
	I0116 03:43:25.708688  787862 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0116 03:43:25.708693  787862 command_runner.go:130] > # irqbalance daemon.
	I0116 03:43:25.709004  787862 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0116 03:43:25.709019  787862 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0116 03:43:25.709025  787862 command_runner.go:130] > # This option supports live configuration reload.
	I0116 03:43:25.709031  787862 command_runner.go:130] > # rdt_config_file = ""
	I0116 03:43:25.709042  787862 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0116 03:43:25.709049  787862 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0116 03:43:25.709057  787862 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0116 03:43:25.709069  787862 command_runner.go:130] > # separate_pull_cgroup = ""
	I0116 03:43:25.709078  787862 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0116 03:43:25.709089  787862 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0116 03:43:25.709094  787862 command_runner.go:130] > # will be added.
	I0116 03:43:25.709100  787862 command_runner.go:130] > # default_capabilities = [
	I0116 03:43:25.709287  787862 command_runner.go:130] > # 	"CHOWN",
	I0116 03:43:25.709299  787862 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0116 03:43:25.709305  787862 command_runner.go:130] > # 	"FSETID",
	I0116 03:43:25.709375  787862 command_runner.go:130] > # 	"FOWNER",
	I0116 03:43:25.709385  787862 command_runner.go:130] > # 	"SETGID",
	I0116 03:43:25.709391  787862 command_runner.go:130] > # 	"SETUID",
	I0116 03:43:25.709396  787862 command_runner.go:130] > # 	"SETPCAP",
	I0116 03:43:25.709528  787862 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0116 03:43:25.709606  787862 command_runner.go:130] > # 	"KILL",
	I0116 03:43:25.709618  787862 command_runner.go:130] > # ]
	I0116 03:43:25.709629  787862 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0116 03:43:25.709638  787862 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0116 03:43:25.709797  787862 command_runner.go:130] > # add_inheritable_capabilities = true
	I0116 03:43:25.709813  787862 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0116 03:43:25.709823  787862 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0116 03:43:25.709829  787862 command_runner.go:130] > # default_sysctls = [
	I0116 03:43:25.709834  787862 command_runner.go:130] > # ]
	I0116 03:43:25.709844  787862 command_runner.go:130] > # List of devices on the host that a
	I0116 03:43:25.709854  787862 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0116 03:43:25.709859  787862 command_runner.go:130] > # allowed_devices = [
	I0116 03:43:25.709924  787862 command_runner.go:130] > # 	"/dev/fuse",
	I0116 03:43:25.710060  787862 command_runner.go:130] > # ]
	I0116 03:43:25.710074  787862 command_runner.go:130] > # List of additional devices, specified as
	I0116 03:43:25.710094  787862 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0116 03:43:25.710104  787862 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0116 03:43:25.710112  787862 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0116 03:43:25.710291  787862 command_runner.go:130] > # additional_devices = [
	I0116 03:43:25.710304  787862 command_runner.go:130] > # ]
	I0116 03:43:25.710312  787862 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0116 03:43:25.710317  787862 command_runner.go:130] > # cdi_spec_dirs = [
	I0116 03:43:25.710322  787862 command_runner.go:130] > # 	"/etc/cdi",
	I0116 03:43:25.710327  787862 command_runner.go:130] > # 	"/var/run/cdi",
	I0116 03:43:25.710470  787862 command_runner.go:130] > # ]
	I0116 03:43:25.710486  787862 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0116 03:43:25.710494  787862 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0116 03:43:25.710500  787862 command_runner.go:130] > # Defaults to false.
	I0116 03:43:25.710576  787862 command_runner.go:130] > # device_ownership_from_security_context = false
	I0116 03:43:25.710591  787862 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0116 03:43:25.710601  787862 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0116 03:43:25.710606  787862 command_runner.go:130] > # hooks_dir = [
	I0116 03:43:25.710789  787862 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0116 03:43:25.710801  787862 command_runner.go:130] > # ]
	I0116 03:43:25.710809  787862 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0116 03:43:25.710817  787862 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0116 03:43:25.710827  787862 command_runner.go:130] > # its default mounts from the following two files:
	I0116 03:43:25.710833  787862 command_runner.go:130] > #
	I0116 03:43:25.710841  787862 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0116 03:43:25.710849  787862 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0116 03:43:25.710859  787862 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0116 03:43:25.710864  787862 command_runner.go:130] > #
	I0116 03:43:25.710875  787862 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0116 03:43:25.710885  787862 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0116 03:43:25.710893  787862 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0116 03:43:25.710901  787862 command_runner.go:130] > #      only add mounts it finds in this file.
	I0116 03:43:25.710905  787862 command_runner.go:130] > #
	I0116 03:43:25.710919  787862 command_runner.go:130] > # default_mounts_file = ""
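Going by the /SRC:/DST, one-mount-per-line format described above, a minimal mounts.conf could contain a single entry such as the following (both paths are hypothetical):

	/usr/share/my-secrets:/run/secrets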
	I0116 03:43:25.710930  787862 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0116 03:43:25.710938  787862 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0116 03:43:25.710943  787862 command_runner.go:130] > # pids_limit = 0
	I0116 03:43:25.710953  787862 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0116 03:43:25.710963  787862 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0116 03:43:25.710971  787862 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0116 03:43:25.710983  787862 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0116 03:43:25.711158  787862 command_runner.go:130] > # log_size_max = -1
	I0116 03:43:25.711175  787862 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0116 03:43:25.711180  787862 command_runner.go:130] > # log_to_journald = false
	I0116 03:43:25.711190  787862 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0116 03:43:25.711198  787862 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0116 03:43:25.711208  787862 command_runner.go:130] > # Path to directory for container attach sockets.
	I0116 03:43:25.711218  787862 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0116 03:43:25.711225  787862 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0116 03:43:25.711409  787862 command_runner.go:130] > # bind_mount_prefix = ""
	I0116 03:43:25.711422  787862 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0116 03:43:25.711427  787862 command_runner.go:130] > # read_only = false
	I0116 03:43:25.711438  787862 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0116 03:43:25.711451  787862 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0116 03:43:25.711457  787862 command_runner.go:130] > # live configuration reload.
	I0116 03:43:25.711465  787862 command_runner.go:130] > # log_level = "info"
	I0116 03:43:25.711472  787862 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0116 03:43:25.711478  787862 command_runner.go:130] > # This option supports live configuration reload.
	I0116 03:43:25.711485  787862 command_runner.go:130] > # log_filter = ""
	I0116 03:43:25.711497  787862 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0116 03:43:25.711508  787862 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0116 03:43:25.711514  787862 command_runner.go:130] > # separated by comma.
	I0116 03:43:25.711705  787862 command_runner.go:130] > # uid_mappings = ""
	I0116 03:43:25.711719  787862 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0116 03:43:25.711727  787862 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0116 03:43:25.711733  787862 command_runner.go:130] > # separated by comma.
	I0116 03:43:25.711741  787862 command_runner.go:130] > # gid_mappings = ""
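Following the containerUID:HostUID:Size (or containerGID:HostGID:Size) form documented above, a single-range mapping might be written as below; the host range 100000/65536 is an assumed example, not a value observed in this run:

	uid_mappings = "0:100000:65536"
	gid_mappings = "0:100000:65536"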
	I0116 03:43:25.711749  787862 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0116 03:43:25.711759  787862 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0116 03:43:25.711768  787862 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0116 03:43:25.711776  787862 command_runner.go:130] > # minimum_mappable_uid = -1
	I0116 03:43:25.711784  787862 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0116 03:43:25.711792  787862 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0116 03:43:25.711803  787862 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0116 03:43:25.711809  787862 command_runner.go:130] > # minimum_mappable_gid = -1
	I0116 03:43:25.711816  787862 command_runner.go:130] > # The minimum amount of time in seconds to wait before issuing a timeout
	I0116 03:43:25.711826  787862 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0116 03:43:25.711834  787862 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0116 03:43:25.711841  787862 command_runner.go:130] > # ctr_stop_timeout = 30
	I0116 03:43:25.711848  787862 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0116 03:43:25.711859  787862 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0116 03:43:25.711865  787862 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0116 03:43:25.711873  787862 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0116 03:43:25.712016  787862 command_runner.go:130] > # drop_infra_ctr = true
	I0116 03:43:25.712032  787862 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0116 03:43:25.712041  787862 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0116 03:43:25.712050  787862 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0116 03:43:25.712058  787862 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0116 03:43:25.712080  787862 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0116 03:43:25.712090  787862 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0116 03:43:25.712098  787862 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0116 03:43:25.712106  787862 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0116 03:43:25.712310  787862 command_runner.go:130] > # pinns_path = ""
	I0116 03:43:25.712326  787862 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0116 03:43:25.712335  787862 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0116 03:43:25.712343  787862 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0116 03:43:25.712353  787862 command_runner.go:130] > # default_runtime = "runc"
	I0116 03:43:25.712360  787862 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0116 03:43:25.712369  787862 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating the path as a directory).
	I0116 03:43:25.712382  787862 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0116 03:43:25.712390  787862 command_runner.go:130] > # creation as a file is not desired either.
	I0116 03:43:25.712401  787862 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0116 03:43:25.712410  787862 command_runner.go:130] > # the hostname is being managed dynamically.
	I0116 03:43:25.712416  787862 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0116 03:43:25.712421  787862 command_runner.go:130] > # ]
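Using the /etc/hostname case called out above, an uncommented version of this list would look like:

	absent_mount_sources_to_reject = [
		"/etc/hostname",
	]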
	I0116 03:43:25.712440  787862 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0116 03:43:25.712449  787862 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0116 03:43:25.712459  787862 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0116 03:43:25.712467  787862 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0116 03:43:25.712471  787862 command_runner.go:130] > #
	I0116 03:43:25.712479  787862 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0116 03:43:25.712487  787862 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0116 03:43:25.712494  787862 command_runner.go:130] > #  runtime_type = "oci"
	I0116 03:43:25.712500  787862 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0116 03:43:25.712506  787862 command_runner.go:130] > #  privileged_without_host_devices = false
	I0116 03:43:25.712512  787862 command_runner.go:130] > #  allowed_annotations = []
	I0116 03:43:25.712519  787862 command_runner.go:130] > # Where:
	I0116 03:43:25.712526  787862 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0116 03:43:25.712536  787862 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0116 03:43:25.712546  787862 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0116 03:43:25.712554  787862 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0116 03:43:25.712559  787862 command_runner.go:130] > #   in $PATH.
	I0116 03:43:25.712570  787862 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0116 03:43:25.712576  787862 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0116 03:43:25.712587  787862 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0116 03:43:25.712592  787862 command_runner.go:130] > #   state.
	I0116 03:43:25.712599  787862 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0116 03:43:25.712607  787862 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0116 03:43:25.712619  787862 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0116 03:43:25.712626  787862 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0116 03:43:25.712637  787862 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0116 03:43:25.712645  787862 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0116 03:43:25.712655  787862 command_runner.go:130] > #   The currently recognized values are:
	I0116 03:43:25.712663  787862 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0116 03:43:25.712674  787862 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0116 03:43:25.712681  787862 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0116 03:43:25.712689  787862 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0116 03:43:25.712698  787862 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0116 03:43:25.712709  787862 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0116 03:43:25.712719  787862 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0116 03:43:25.712728  787862 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0116 03:43:25.712736  787862 command_runner.go:130] > #   should be moved to the container's cgroup
	I0116 03:43:25.712940  787862 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0116 03:43:25.712955  787862 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0116 03:43:25.712962  787862 command_runner.go:130] > runtime_type = "oci"
	I0116 03:43:25.712967  787862 command_runner.go:130] > runtime_root = "/run/runc"
	I0116 03:43:25.712973  787862 command_runner.go:130] > runtime_config_path = ""
	I0116 03:43:25.712980  787862 command_runner.go:130] > monitor_path = ""
	I0116 03:43:25.712987  787862 command_runner.go:130] > monitor_cgroup = ""
	I0116 03:43:25.712993  787862 command_runner.go:130] > monitor_exec_cgroup = ""
	I0116 03:43:25.713010  787862 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0116 03:43:25.713019  787862 command_runner.go:130] > # running containers
	I0116 03:43:25.713024  787862 command_runner.go:130] > #[crio.runtime.runtimes.crun]
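A filled-in crun stanza following the runtime-handler format documented above might look like the following; the binary and root paths are assumptions, not values observed in this run:

	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"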
	I0116 03:43:25.713032  787862 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0116 03:43:25.713043  787862 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0116 03:43:25.713050  787862 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0116 03:43:25.713057  787862 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0116 03:43:25.713065  787862 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0116 03:43:25.713072  787862 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0116 03:43:25.713080  787862 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0116 03:43:25.713086  787862 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0116 03:43:25.713094  787862 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0116 03:43:25.713102  787862 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0116 03:43:25.713111  787862 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0116 03:43:25.713121  787862 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0116 03:43:25.713131  787862 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0116 03:43:25.713143  787862 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0116 03:43:25.713150  787862 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0116 03:43:25.713161  787862 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0116 03:43:25.713174  787862 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0116 03:43:25.713182  787862 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0116 03:43:25.713194  787862 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0116 03:43:25.713199  787862 command_runner.go:130] > # Example:
	I0116 03:43:25.713207  787862 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0116 03:43:25.713213  787862 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0116 03:43:25.713221  787862 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0116 03:43:25.713228  787862 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0116 03:43:25.713233  787862 command_runner.go:130] > # cpuset = "0-1"
	I0116 03:43:25.713240  787862 command_runner.go:130] > # cpushares = 0
	I0116 03:43:25.713245  787862 command_runner.go:130] > # Where:
	I0116 03:43:25.713253  787862 command_runner.go:130] > # The workload name is workload-type.
	I0116 03:43:25.713264  787862 command_runner.go:130] > # To select this workload, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0116 03:43:25.713271  787862 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0116 03:43:25.713278  787862 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0116 03:43:25.713291  787862 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0116 03:43:25.713299  787862 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0116 03:43:25.713456  787862 command_runner.go:130] > # 
	I0116 03:43:25.713471  787862 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0116 03:43:25.713476  787862 command_runner.go:130] > #
	I0116 03:43:25.713490  787862 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0116 03:43:25.713498  787862 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0116 03:43:25.713506  787862 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0116 03:43:25.713520  787862 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0116 03:43:25.713527  787862 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0116 03:43:25.713532  787862 command_runner.go:130] > [crio.image]
	I0116 03:43:25.713541  787862 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0116 03:43:25.713627  787862 command_runner.go:130] > # default_transport = "docker://"
	I0116 03:43:25.713642  787862 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0116 03:43:25.713651  787862 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0116 03:43:25.713656  787862 command_runner.go:130] > # global_auth_file = ""
	I0116 03:43:25.713665  787862 command_runner.go:130] > # The image used to instantiate infra containers.
	I0116 03:43:25.713672  787862 command_runner.go:130] > # This option supports live configuration reload.
	I0116 03:43:25.713680  787862 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0116 03:43:25.713689  787862 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0116 03:43:25.713699  787862 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0116 03:43:25.713706  787862 command_runner.go:130] > # This option supports live configuration reload.
	I0116 03:43:25.713875  787862 command_runner.go:130] > # pause_image_auth_file = ""
	I0116 03:43:25.713888  787862 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0116 03:43:25.713897  787862 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0116 03:43:25.713905  787862 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0116 03:43:25.713913  787862 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0116 03:43:25.713923  787862 command_runner.go:130] > # pause_command = "/pause"
	I0116 03:43:25.713931  787862 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0116 03:43:25.713939  787862 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0116 03:43:25.713950  787862 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0116 03:43:25.713974  787862 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0116 03:43:25.713984  787862 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0116 03:43:25.713989  787862 command_runner.go:130] > # signature_policy = ""
	I0116 03:43:25.713997  787862 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0116 03:43:25.714008  787862 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0116 03:43:25.714014  787862 command_runner.go:130] > # changing them here.
	I0116 03:43:25.714019  787862 command_runner.go:130] > # insecure_registries = [
	I0116 03:43:25.714026  787862 command_runner.go:130] > # ]
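For illustration only, trusting a hypothetical local registry without TLS verification would look like this; as advised above, configuring /etc/containers/registries.conf is usually preferable:

	insecure_registries = [
		"registry.local:5000",
	]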
	I0116 03:43:25.714034  787862 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0116 03:43:25.714043  787862 command_runner.go:130] > # ignore; the last of these ignores volumes entirely.
	I0116 03:43:25.714221  787862 command_runner.go:130] > # image_volumes = "mkdir"
	I0116 03:43:25.714234  787862 command_runner.go:130] > # Temporary directory to use for storing big files
	I0116 03:43:25.714240  787862 command_runner.go:130] > # big_files_temporary_dir = ""
	I0116 03:43:25.714248  787862 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0116 03:43:25.714254  787862 command_runner.go:130] > # CNI plugins.
	I0116 03:43:25.714261  787862 command_runner.go:130] > [crio.network]
	I0116 03:43:25.714273  787862 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0116 03:43:25.714282  787862 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0116 03:43:25.714289  787862 command_runner.go:130] > # cni_default_network = ""
	I0116 03:43:25.714296  787862 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0116 03:43:25.714305  787862 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0116 03:43:25.714312  787862 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0116 03:43:25.714317  787862 command_runner.go:130] > # plugin_dirs = [
	I0116 03:43:25.714468  787862 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0116 03:43:25.714551  787862 command_runner.go:130] > # ]
	I0116 03:43:25.714566  787862 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I0116 03:43:25.714571  787862 command_runner.go:130] > [crio.metrics]
	I0116 03:43:25.714578  787862 command_runner.go:130] > # Globally enable or disable metrics support.
	I0116 03:43:25.714583  787862 command_runner.go:130] > # enable_metrics = false
	I0116 03:43:25.714589  787862 command_runner.go:130] > # Specify enabled metrics collectors.
	I0116 03:43:25.714595  787862 command_runner.go:130] > # Per default all metrics are enabled.
	I0116 03:43:25.714603  787862 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0116 03:43:25.714613  787862 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0116 03:43:25.714623  787862 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0116 03:43:25.714629  787862 command_runner.go:130] > # metrics_collectors = [
	I0116 03:43:25.714794  787862 command_runner.go:130] > # 	"operations",
	I0116 03:43:25.714806  787862 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0116 03:43:25.714813  787862 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0116 03:43:25.714818  787862 command_runner.go:130] > # 	"operations_errors",
	I0116 03:43:25.714826  787862 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0116 03:43:25.714832  787862 command_runner.go:130] > # 	"image_pulls_by_name",
	I0116 03:43:25.714838  787862 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0116 03:43:25.715021  787862 command_runner.go:130] > # 	"image_pulls_failures",
	I0116 03:43:25.715033  787862 command_runner.go:130] > # 	"image_pulls_successes",
	I0116 03:43:25.715039  787862 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0116 03:43:25.715045  787862 command_runner.go:130] > # 	"image_layer_reuse",
	I0116 03:43:25.715050  787862 command_runner.go:130] > # 	"containers_oom_total",
	I0116 03:43:25.715060  787862 command_runner.go:130] > # 	"containers_oom",
	I0116 03:43:25.715067  787862 command_runner.go:130] > # 	"processes_defunct",
	I0116 03:43:25.715072  787862 command_runner.go:130] > # 	"operations_total",
	I0116 03:43:25.715080  787862 command_runner.go:130] > # 	"operations_latency_seconds",
	I0116 03:43:25.715240  787862 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0116 03:43:25.715254  787862 command_runner.go:130] > # 	"operations_errors_total",
	I0116 03:43:25.715260  787862 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0116 03:43:25.715266  787862 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0116 03:43:25.715320  787862 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0116 03:43:25.715334  787862 command_runner.go:130] > # 	"image_pulls_success_total",
	I0116 03:43:25.715340  787862 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0116 03:43:25.715512  787862 command_runner.go:130] > # 	"containers_oom_count_total",
	I0116 03:43:25.715691  787862 command_runner.go:130] > # ]
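A sketch of enabling metrics with a reduced collector set, reusing names from the list above:

	enable_metrics = true
	metrics_collectors = [
		"operations",
		"image_pulls_failures",
	]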
	I0116 03:43:25.715707  787862 command_runner.go:130] > # The port on which the metrics server will listen.
	I0116 03:43:25.715791  787862 command_runner.go:130] > # metrics_port = 9090
	I0116 03:43:25.715804  787862 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0116 03:43:25.715810  787862 command_runner.go:130] > # metrics_socket = ""
	I0116 03:43:25.715816  787862 command_runner.go:130] > # The certificate for the secure metrics server.
	I0116 03:43:25.715824  787862 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0116 03:43:25.715832  787862 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0116 03:43:25.715838  787862 command_runner.go:130] > # certificate on any modification event.
	I0116 03:43:25.715843  787862 command_runner.go:130] > # metrics_cert = ""
	I0116 03:43:25.715852  787862 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0116 03:43:25.715859  787862 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0116 03:43:25.716026  787862 command_runner.go:130] > # metrics_key = ""
	I0116 03:43:25.716041  787862 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0116 03:43:25.716047  787862 command_runner.go:130] > [crio.tracing]
	I0116 03:43:25.716054  787862 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0116 03:43:25.716071  787862 command_runner.go:130] > # enable_tracing = false
	I0116 03:43:25.716079  787862 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0116 03:43:25.716085  787862 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0116 03:43:25.716091  787862 command_runner.go:130] > # Number of samples to collect per million spans.
	I0116 03:43:25.716097  787862 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0116 03:43:25.716109  787862 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0116 03:43:25.716114  787862 command_runner.go:130] > [crio.stats]
	I0116 03:43:25.716121  787862 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0116 03:43:25.716130  787862 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0116 03:43:25.716362  787862 command_runner.go:130] > # stats_collection_period = 0
	I0116 03:43:25.718196  787862 command_runner.go:130] ! time="2024-01-16 03:43:25.703805481Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0116 03:43:25.718220  787862 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0116 03:43:25.718279  787862 cni.go:84] Creating CNI manager for ""
	I0116 03:43:25.718289  787862 cni.go:136] 2 nodes found, recommending kindnet
	I0116 03:43:25.718298  787862 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 03:43:25.718320  787862 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-741097 NodeName:multinode-741097-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 03:43:25.718442  787862 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-741097-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 03:43:25.718497  787862 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-741097-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-741097 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 03:43:25.718562  787862 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 03:43:25.728148  787862 command_runner.go:130] > kubeadm
	I0116 03:43:25.728167  787862 command_runner.go:130] > kubectl
	I0116 03:43:25.728172  787862 command_runner.go:130] > kubelet
	I0116 03:43:25.729251  787862 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 03:43:25.729321  787862 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0116 03:43:25.739358  787862 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0116 03:43:25.759981  787862 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 03:43:25.780847  787862 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0116 03:43:25.785280  787862 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:43:25.797966  787862 host.go:66] Checking if "multinode-741097" exists ...
	I0116 03:43:25.798251  787862 config.go:182] Loaded profile config "multinode-741097": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:43:25.798500  787862 start.go:304] JoinCluster: &{Name:multinode-741097 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-741097 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:43:25.798587  787862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0116 03:43:25.798645  787862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-741097
	I0116 03:43:25.818895  787862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33557 SSHKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/machines/multinode-741097/id_rsa Username:docker}
	I0116 03:43:25.982039  787862 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token uzjzbg.jtu02jdm1xcwa4ks --discovery-token-ca-cert-hash sha256:78b446be54113cf43e3853835de42782a6b98d45d441359ad299b10cb7c55484 
	I0116 03:43:25.986129  787862 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0116 03:43:25.986168  787862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token uzjzbg.jtu02jdm1xcwa4ks --discovery-token-ca-cert-hash sha256:78b446be54113cf43e3853835de42782a6b98d45d441359ad299b10cb7c55484 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-741097-m02"
	I0116 03:43:26.029776  787862 command_runner.go:130] > [preflight] Running pre-flight checks
	I0116 03:43:26.068643  787862 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0116 03:43:26.068671  787862 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1051-aws
	I0116 03:43:26.068678  787862 command_runner.go:130] > OS: Linux
	I0116 03:43:26.068685  787862 command_runner.go:130] > CGROUPS_CPU: enabled
	I0116 03:43:26.068693  787862 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0116 03:43:26.068700  787862 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0116 03:43:26.068707  787862 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0116 03:43:26.068717  787862 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0116 03:43:26.068724  787862 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0116 03:43:26.068737  787862 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0116 03:43:26.068745  787862 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0116 03:43:26.068755  787862 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0116 03:43:26.178739  787862 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0116 03:43:26.178768  787862 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0116 03:43:26.212725  787862 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 03:43:26.212782  787862 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 03:43:26.212976  787862 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0116 03:43:26.317264  787862 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0116 03:43:29.331056  787862 command_runner.go:130] > This node has joined the cluster:
	I0116 03:43:29.331082  787862 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0116 03:43:29.331091  787862 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0116 03:43:29.331100  787862 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0116 03:43:29.334503  787862 command_runner.go:130] ! W0116 03:43:26.029202    1025 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0116 03:43:29.334532  787862 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I0116 03:43:29.334546  787862 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 03:43:29.334561  787862 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token uzjzbg.jtu02jdm1xcwa4ks --discovery-token-ca-cert-hash sha256:78b446be54113cf43e3853835de42782a6b98d45d441359ad299b10cb7c55484 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-741097-m02": (3.348378698s)
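The join step above is a single kubeadm invocation over SSH, built from the token and discovery hash printed by 'kubeadm token create --print-join-command'. A minimal Go sketch of driving the same command locally; the endpoint, token, and hash below are placeholders, not this run's values:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Placeholders -- substitute the output of `kubeadm token create --print-join-command`.
	endpoint := "control-plane.minikube.internal:8443"
	token := "<token>"
	caCertHash := "sha256:<hash>"

	cmd := exec.Command("sudo", "kubeadm", "join", endpoint,
		"--token", token,
		"--discovery-token-ca-cert-hash", caCertHash,
		"--ignore-preflight-errors=all",
		// Spelling out the unix:// scheme avoids the CRI-endpoint deprecation
		// warning that kubeadm emits in the log above.
		"--cri-socket", "unix:///var/run/crio/crio.sock",
		"--node-name", "multinode-741097-m02")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		os.Exit(1)
	}
}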
	I0116 03:43:29.334577  787862 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0116 03:43:29.561862  787862 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0116 03:43:29.561952  787862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd minikube.k8s.io/name=multinode-741097 minikube.k8s.io/updated_at=2024_01_16T03_43_29_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:43:29.680833  787862 command_runner.go:130] > node/multinode-741097-m02 labeled
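The kubectl label call above tags every node not labeled primary=true as a non-primary member. The same labels can be applied through the API directly; a hedged client-go sketch, with the kubeconfig path as a placeholder and the node addressed by name rather than by label selector:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Merge-patch a subset of the labels the kubectl command above sets.
	patch := []byte(`{"metadata":{"labels":{"minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false"}}}`)
	if _, err := cs.CoreV1().Nodes().Patch(context.TODO(), "multinode-741097-m02",
		types.MergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
}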
	I0116 03:43:29.684190  787862 start.go:306] JoinCluster complete in 3.885684865s
	I0116 03:43:29.684215  787862 cni.go:84] Creating CNI manager for ""
	I0116 03:43:29.684222  787862 cni.go:136] 2 nodes found, recommending kindnet
	I0116 03:43:29.684277  787862 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0116 03:43:29.688700  787862 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0116 03:43:29.688727  787862 command_runner.go:130] >   Size: 4030506   	Blocks: 7880       IO Block: 4096   regular file
	I0116 03:43:29.688739  787862 command_runner.go:130] > Device: 3ah/58d	Inode: 1308531     Links: 1
	I0116 03:43:29.688747  787862 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0116 03:43:29.688757  787862 command_runner.go:130] > Access: 2023-12-04 16:39:54.000000000 +0000
	I0116 03:43:29.688764  787862 command_runner.go:130] > Modify: 2023-12-04 16:39:54.000000000 +0000
	I0116 03:43:29.688771  787862 command_runner.go:130] > Change: 2024-01-16 03:20:36.520574610 +0000
	I0116 03:43:29.688777  787862 command_runner.go:130] >  Birth: 2024-01-16 03:20:36.476574257 +0000
	I0116 03:43:29.688987  787862 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0116 03:43:29.688997  787862 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0116 03:43:29.709486  787862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0116 03:43:30.080113  787862 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0116 03:43:30.087425  787862 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0116 03:43:30.091310  787862 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0116 03:43:30.107050  787862 command_runner.go:130] > daemonset.apps/kindnet configured
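The lines above scp the generated cni.yaml (2438 bytes) to the node and apply it with the bundled kubectl, which reports each kindnet object as unchanged or configured. A sketch of the same apply that feeds the manifest to kubectl over stdin instead of copying it first; the ServiceAccount body is a stand-in, not minikube's real manifest:

package main

import (
	"os"
	"os/exec"
	"strings"
)

// Stand-in manifest; minikube's actual cni.yaml also carries the kindnet
// ClusterRole, ClusterRoleBinding, and DaemonSet seen in the log above.
const manifest = `apiVersion: v1
kind: ServiceAccount
metadata:
  name: kindnet
  namespace: kube-system
`

func main() {
	// Equivalent of: kubectl apply --kubeconfig=... -f /var/tmp/minikube/cni.yaml
	cmd := exec.Command("kubectl", "apply", "-f", "-")
	cmd.Stdin = strings.NewReader(manifest)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}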
	I0116 03:43:30.113108  787862 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17967-719286/kubeconfig
	I0116 03:43:30.113377  787862 kapi.go:59] client config for multinode-741097: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17967-719286/.minikube/profiles/multinode-741097/client.crt", KeyFile:"/home/jenkins/minikube-integration/17967-719286/.minikube/profiles/multinode-741097/client.key", CAFile:"/home/jenkins/minikube-integration/17967-719286/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9c50), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 03:43:30.113716  787862 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0116 03:43:30.113730  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:30.113739  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:30.113747  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:30.116740  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:30.116775  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:30.116786  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:30.116793  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:30.116800  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:30.116806  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:30.116816  787862 round_trippers.go:580]     Content-Length: 291
	I0116 03:43:30.116822  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:30 GMT
	I0116 03:43:30.116829  787862 round_trippers.go:580]     Audit-Id: 496fafc5-1f37-4d63-9c80-01f5833c37ac
	I0116 03:43:30.116990  787862 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"f3c5529d-e02d-46a5-965b-a2d49fe27004","resourceVersion":"448","creationTimestamp":"2024-01-16T03:42:28Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0116 03:43:30.117090  787862 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-741097" context rescaled to 1 replicas
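The GET above reads the autoscaling/v1 Scale subresource of the coredns deployment, and the rescale keeps it at one replica. A minimal client-go sketch of the same read-then-update, assuming a reachable kubeconfig (path is a placeholder):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Read the Scale subresource, as in the GET .../deployments/coredns/scale above.
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if scale.Spec.Replicas != 1 {
		scale.Spec.Replicas = 1
		if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}
	fmt.Println("coredns spec replicas:", scale.Spec.Replicas)
}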
	I0116 03:43:30.117123  787862 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0116 03:43:30.121425  787862 out.go:177] * Verifying Kubernetes components...
	I0116 03:43:30.123653  787862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:43:30.144453  787862 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17967-719286/kubeconfig
	I0116 03:43:30.144731  787862 kapi.go:59] client config for multinode-741097: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17967-719286/.minikube/profiles/multinode-741097/client.crt", KeyFile:"/home/jenkins/minikube-integration/17967-719286/.minikube/profiles/multinode-741097/client.key", CAFile:"/home/jenkins/minikube-integration/17967-719286/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9c50), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 03:43:30.145009  787862 node_ready.go:35] waiting up to 6m0s for node "multinode-741097-m02" to be "Ready" ...
	I0116 03:43:30.145084  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:30.145095  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:30.145104  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:30.145115  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:30.148515  787862 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:43:30.148581  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:30.148604  787862 round_trippers.go:580]     Audit-Id: 343b88a5-0186-4e4f-9883-a31ae4630b5f
	I0116 03:43:30.148622  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:30.148652  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:30.148677  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:30.148693  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:30.148713  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:30 GMT
	I0116 03:43:30.149117  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"488","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5735 chars]
	I0116 03:43:30.645801  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:30.645823  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:30.645833  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:30.645845  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:30.648114  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:30.648134  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:30.648142  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:30 GMT
	I0116 03:43:30.648149  787862 round_trippers.go:580]     Audit-Id: f0ffe0f6-ed0a-4b70-bb9b-7459c48acdca
	I0116 03:43:30.648155  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:30.648161  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:30.648170  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:30.648182  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:30.648362  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"488","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5735 chars]
	I0116 03:43:31.145209  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:31.145233  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:31.145243  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:31.145250  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:31.147642  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:31.147686  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:31.147720  787862 round_trippers.go:580]     Audit-Id: 307d8a8a-bcd4-4395-bea7-b69983790ed4
	I0116 03:43:31.147745  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:31.147764  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:31.147780  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:31.147815  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:31.147834  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:31 GMT
	I0116 03:43:31.148354  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"488","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5735 chars]
	I0116 03:43:31.645854  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:31.645879  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:31.645889  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:31.645896  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:31.648316  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:31.648335  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:31.648349  787862 round_trippers.go:580]     Audit-Id: 181aa6e4-9830-4534-8bdf-ad95b95ce611
	I0116 03:43:31.648356  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:31.648362  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:31.648372  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:31.648379  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:31.648389  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:31 GMT
	I0116 03:43:31.648762  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"501","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0116 03:43:32.145251  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:32.145273  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:32.145283  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:32.145290  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:32.147656  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:32.147677  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:32.147686  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:32.147712  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:32.147718  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:32.147729  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:32 GMT
	I0116 03:43:32.147736  787862 round_trippers.go:580]     Audit-Id: 47a926be-b03a-4f3c-b16b-820e97d76cdb
	I0116 03:43:32.147745  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:32.148252  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"501","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0116 03:43:32.148661  787862 node_ready.go:58] node "multinode-741097-m02" has status "Ready":"False"
	I0116 03:43:32.645907  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:32.645929  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:32.645939  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:32.645946  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:32.648358  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:32.648387  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:32.648395  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:32 GMT
	I0116 03:43:32.648401  787862 round_trippers.go:580]     Audit-Id: b45a7364-9e8a-40a7-8f68-323a4004d0c7
	I0116 03:43:32.648407  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:32.648413  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:32.648452  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:32.648465  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:32.648578  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"501","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0116 03:43:33.146018  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:33.146043  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:33.146053  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:33.146060  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:33.148504  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:33.148531  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:33.148541  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:33.148561  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:33.148597  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:33 GMT
	I0116 03:43:33.148610  787862 round_trippers.go:580]     Audit-Id: 5c9477b8-d0ba-4143-bfdf-6ad129231570
	I0116 03:43:33.148617  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:33.148625  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:33.148934  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"501","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0116 03:43:33.645312  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:33.645342  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:33.645352  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:33.645359  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:33.647711  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:33.647735  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:33.647743  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:33.647750  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:33.647756  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:33.647762  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:33.647768  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:33 GMT
	I0116 03:43:33.647776  787862 round_trippers.go:580]     Audit-Id: faac2473-cd14-4125-814d-77536328a2d9
	I0116 03:43:33.648049  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"501","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0116 03:43:34.145900  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:34.145922  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:34.145933  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:34.145941  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:34.148495  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:34.148515  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:34.148523  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:34.148529  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:34.148535  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:34 GMT
	I0116 03:43:34.148542  787862 round_trippers.go:580]     Audit-Id: 27c5b4f6-7bf8-4ac3-a6a1-21ec7f4dbb6f
	I0116 03:43:34.148548  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:34.148554  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:34.148722  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"501","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0116 03:43:34.149132  787862 node_ready.go:58] node "multinode-741097-m02" has status "Ready":"False"
	I0116 03:43:34.645237  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:34.645260  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:34.645269  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:34.645277  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:34.647600  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:34.647622  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:34.647630  787862 round_trippers.go:580]     Audit-Id: 496d63cd-b87d-4a4c-8526-eb0afb1542e9
	I0116 03:43:34.647636  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:34.647645  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:34.647651  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:34.647658  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:34.647667  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:34 GMT
	I0116 03:43:34.647773  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"501","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0116 03:43:35.145416  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:35.145441  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:35.145451  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:35.145459  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:35.158005  787862 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0116 03:43:35.158043  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:35.158053  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:35 GMT
	I0116 03:43:35.158060  787862 round_trippers.go:580]     Audit-Id: 6dbeebd2-9b53-4e6b-89c3-ce9b6ed9f046
	I0116 03:43:35.158066  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:35.158072  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:35.158083  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:35.158089  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:35.158207  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"501","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0116 03:43:35.645243  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:35.645266  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:35.645275  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:35.645282  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:35.647702  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:35.647722  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:35.647730  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:35.647737  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:35.647744  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:35.647751  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:35 GMT
	I0116 03:43:35.647757  787862 round_trippers.go:580]     Audit-Id: 2dbc978a-969b-4850-b7a7-22c1be1ccfd1
	I0116 03:43:35.647763  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:35.647877  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"501","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0116 03:43:36.145927  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:36.145972  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:36.145983  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:36.145990  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:36.148388  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:36.148410  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:36.148418  787862 round_trippers.go:580]     Audit-Id: a5d0a6a3-3ecb-42df-a14a-aeb7885dafed
	I0116 03:43:36.148431  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:36.148437  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:36.148444  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:36.148454  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:36.148460  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:36 GMT
	I0116 03:43:36.148587  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"501","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0116 03:43:36.645255  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:36.645278  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:36.645288  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:36.645295  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:36.647565  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:36.647587  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:36.647595  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:36.647601  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:36 GMT
	I0116 03:43:36.647607  787862 round_trippers.go:580]     Audit-Id: d6cd86a5-db15-48e3-9ecf-e29edb0ca5d4
	I0116 03:43:36.647613  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:36.647619  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:36.647626  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:36.647737  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"501","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0116 03:43:36.648144  787862 node_ready.go:58] node "multinode-741097-m02" has status "Ready":"False"
	I0116 03:43:37.145826  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:37.145849  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:37.145858  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:37.145871  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:37.149081  787862 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:43:37.149101  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:37.149113  787862 round_trippers.go:580]     Audit-Id: 6b8fcbb9-44a4-4069-8864-5ecd557694cb
	I0116 03:43:37.149120  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:37.149128  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:37.149136  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:37.149145  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:37.149152  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:37 GMT
	I0116 03:43:37.149265  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"501","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0116 03:43:37.645303  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:37.645329  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:37.645339  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:37.645346  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:37.650521  787862 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0116 03:43:37.650552  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:37.650561  787862 round_trippers.go:580]     Audit-Id: bac95f07-3cb0-4d82-b047-63f5c770b91c
	I0116 03:43:37.650567  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:37.650573  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:37.650579  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:37.650586  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:37.650596  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:37 GMT
	I0116 03:43:37.651032  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"501","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0116 03:43:38.146155  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:38.146177  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:38.146188  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:38.146195  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:38.148582  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:38.148601  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:38.148609  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:38.148615  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:38.148622  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:38.148629  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:38.148635  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:38 GMT
	I0116 03:43:38.148642  787862 round_trippers.go:580]     Audit-Id: 57589450-7303-4bf9-be41-fc56db6e2ca5
	I0116 03:43:38.148943  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"501","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0116 03:43:38.645253  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:38.645276  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:38.645286  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:38.645293  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:38.647621  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:38.647643  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:38.647652  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:38 GMT
	I0116 03:43:38.647658  787862 round_trippers.go:580]     Audit-Id: a9663420-8e6a-4ed3-bc18-4ffc7b96b0c2
	I0116 03:43:38.647664  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:38.647677  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:38.647683  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:38.647693  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:38.647891  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"501","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0116 03:43:38.648296  787862 node_ready.go:58] node "multinode-741097-m02" has status "Ready":"False"
	I0116 03:43:39.145852  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:39.145874  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:39.145883  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:39.145890  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:39.149092  787862 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:43:39.149112  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:39.149121  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:39.149128  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:39 GMT
	I0116 03:43:39.149134  787862 round_trippers.go:580]     Audit-Id: 4a7a8e17-33fd-4c9d-8278-11f1981f01e7
	I0116 03:43:39.149140  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:39.149147  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:39.149153  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:39.149354  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"501","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0116 03:43:39.646050  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:39.646075  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:39.646085  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:39.646092  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:39.648619  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:39.648645  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:39.648654  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:39.648664  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:39.648671  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:39 GMT
	I0116 03:43:39.648681  787862 round_trippers.go:580]     Audit-Id: 406133dc-5167-4d07-bf9f-0a44db0b42e6
	I0116 03:43:39.648691  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:39.648697  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:39.649191  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"509","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 03:43:40.145225  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:40.145247  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:40.145258  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:40.145265  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:40.147721  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:40.147741  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:40.147749  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:40 GMT
	I0116 03:43:40.147755  787862 round_trippers.go:580]     Audit-Id: 0ba4544c-26f8-43ab-82f3-200e652b02af
	I0116 03:43:40.147761  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:40.147767  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:40.147773  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:40.147780  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:40.148415  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"509","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 03:43:40.646060  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:40.646084  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:40.646093  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:40.646100  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:40.648535  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:40.648553  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:40.648560  787862 round_trippers.go:580]     Audit-Id: 7f0b2239-16c7-4d10-91bf-b271d4f50148
	I0116 03:43:40.648567  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:40.648573  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:40.648583  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:40.648594  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:40.648606  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:40 GMT
	I0116 03:43:40.648893  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"509","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 03:43:40.649291  787862 node_ready.go:58] node "multinode-741097-m02" has status "Ready":"False"
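
Editor's note: the entries above and below repeat a single pattern. minikube's wait loop issues GET /api/v1/nodes/multinode-741097-m02 roughly every 500ms (compare the request timestamps) and logs has status "Ready":"False" until the node's NodeReady condition flips to "True"; the resourceVersion holding at "509" across polls shows the node object itself is not changing between requests. Below is a minimal sketch of the same readiness poll — illustrative only, not minikube's actual node_ready.go — assuming client-go >= v0.18 and a kubeconfig at the default path; the node name is taken from the log, the 6-minute timeout is for illustration.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the API server until the named node reports
	// NodeReady=True, the context expires, or a request fails.
	func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
		// Matches the ~500ms cadence visible in the log above.
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil // node is Ready
				}
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-ticker.C:
			}
		}
	}

	func main() {
		// Load the default kubeconfig (~/.kube/config); a hypothetical setup for this sketch.
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		if err := waitNodeReady(ctx, cs, "multinode-741097-m02"); err != nil {
			fmt.Println("node never became Ready:", err)
		}
	}

Where kubectl is available, an equivalent one-liner should be: kubectl wait --for=condition=Ready node/multinode-741097-m02 --timeout=6m. The raw log resumes below.
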
	I0116 03:43:41.146206  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:41.146229  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:41.146239  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:41.146246  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:41.149686  787862 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:43:41.149705  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:41.149713  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:41 GMT
	I0116 03:43:41.149720  787862 round_trippers.go:580]     Audit-Id: 36ac9b8a-97aa-4005-a06d-b8aa56ab0f2b
	I0116 03:43:41.149726  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:41.149732  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:41.149738  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:41.149745  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:41.149882  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"509","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 03:43:41.645946  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:41.645971  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:41.645981  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:41.645989  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:41.648500  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:41.648525  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:41.648533  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:41.648540  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:41.648546  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:41.648552  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:41.648558  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:41 GMT
	I0116 03:43:41.648565  787862 round_trippers.go:580]     Audit-Id: 8116a71c-c2b5-43b0-ab5e-de593d8e8d5d
	I0116 03:43:41.648689  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"509","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 03:43:42.145880  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:42.145904  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:42.145914  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:42.145921  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:42.148726  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:42.148757  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:42.148766  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:42.148774  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:42.148780  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:42.148788  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:42.148795  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:42 GMT
	I0116 03:43:42.148806  787862 round_trippers.go:580]     Audit-Id: e53262f9-4202-45fc-a6a4-524ee68df95e
	I0116 03:43:42.148970  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"509","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 03:43:42.646057  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:42.646080  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:42.646089  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:42.646096  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:42.648490  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:42.648514  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:42.648522  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:42.648529  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:42 GMT
	I0116 03:43:42.648535  787862 round_trippers.go:580]     Audit-Id: 3111d872-049e-4a5e-bcc5-6812df21c9d6
	I0116 03:43:42.648541  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:42.648550  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:42.648556  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:42.648688  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"509","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 03:43:43.145848  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:43.145870  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:43.145879  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:43.145887  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:43.148517  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:43.148539  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:43.148548  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:43.148556  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:43 GMT
	I0116 03:43:43.148562  787862 round_trippers.go:580]     Audit-Id: 6e876fd6-b7fc-42d5-9a0b-df63914b12fa
	I0116 03:43:43.148572  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:43.148578  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:43.148588  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:43.148791  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"509","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 03:43:43.149206  787862 node_ready.go:58] node "multinode-741097-m02" has status "Ready":"False"
	I0116 03:43:43.645259  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:43.645281  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:43.645291  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:43.645298  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:43.647638  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:43.647659  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:43.647668  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:43.647674  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:43.647681  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:43.647687  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:43.647693  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:43 GMT
	I0116 03:43:43.647699  787862 round_trippers.go:580]     Audit-Id: e812b2b8-f50a-4e39-a3c5-a134d02939ab
	I0116 03:43:43.647814  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"509","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 03:43:44.145724  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:44.145749  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:44.145758  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:44.145766  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:44.149057  787862 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:43:44.149082  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:44.149090  787862 round_trippers.go:580]     Audit-Id: 07b9e17c-d278-4723-bf62-559489d1d543
	I0116 03:43:44.149097  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:44.149103  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:44.149110  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:44.149116  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:44.149123  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:44 GMT
	I0116 03:43:44.149441  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"509","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 03:43:44.646139  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:44.646163  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:44.646173  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:44.646180  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:44.648542  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:44.648560  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:44.648568  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:44.648575  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:44.648581  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:44 GMT
	I0116 03:43:44.648587  787862 round_trippers.go:580]     Audit-Id: 65ee2b1f-920f-4dd3-9567-a704a5d34d55
	I0116 03:43:44.648593  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:44.648599  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:44.648724  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"509","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 03:43:45.145407  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:45.145428  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:45.145438  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:45.145450  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:45.148005  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:45.148028  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:45.148038  787862 round_trippers.go:580]     Audit-Id: 2ee5d4e5-1640-4bd0-b419-a3c1df8c827f
	I0116 03:43:45.148045  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:45.148051  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:45.148057  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:45.148088  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:45.148097  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:45 GMT
	I0116 03:43:45.148289  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"509","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 03:43:45.645730  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:45.645752  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:45.645761  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:45.645768  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:45.648046  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:45.648082  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:45.648091  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:45 GMT
	I0116 03:43:45.648097  787862 round_trippers.go:580]     Audit-Id: 0248e8cb-140a-45ba-8221-a6304ccc946d
	I0116 03:43:45.648104  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:45.648110  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:45.648116  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:45.648122  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:45.648511  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"509","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 03:43:45.648910  787862 node_ready.go:58] node "multinode-741097-m02" has status "Ready":"False"
	I0116 03:43:46.145645  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:46.145669  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:46.145679  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:46.145686  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:46.148169  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:46.148187  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:46.148195  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:46 GMT
	I0116 03:43:46.148202  787862 round_trippers.go:580]     Audit-Id: 1e209501-b828-4553-942e-d492fb41c541
	I0116 03:43:46.148208  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:46.148214  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:46.148220  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:46.148227  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:46.148440  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"509","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 03:43:46.645553  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:46.645576  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:46.645586  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:46.645593  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:46.648887  787862 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:43:46.648913  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:46.648922  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:46.648928  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:46.648935  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:46.648942  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:46 GMT
	I0116 03:43:46.648950  787862 round_trippers.go:580]     Audit-Id: c7772154-d421-4e59-a416-c915b7fb679c
	I0116 03:43:46.648958  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:46.649067  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"509","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 03:43:47.146212  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:47.146237  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:47.146246  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:47.146254  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:47.148671  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:47.148701  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:47.148710  787862 round_trippers.go:580]     Audit-Id: 9a1abed5-0847-4c91-807f-5d6b5bf476a0
	I0116 03:43:47.148717  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:47.148724  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:47.148734  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:47.148744  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:47.148751  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:47 GMT
	I0116 03:43:47.148984  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"509","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 03:43:47.645477  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:47.645500  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:47.645510  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:47.645517  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:47.647871  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:47.647891  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:47.647899  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:47.647905  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:47.647912  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:47.647918  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:47.647925  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:47 GMT
	I0116 03:43:47.647932  787862 round_trippers.go:580]     Audit-Id: 7b8ddb93-84b7-40ee-9eae-f6f25e2d202d
	I0116 03:43:47.648132  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"509","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 03:43:48.145252  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:48.145275  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:48.145285  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:48.145291  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:48.147866  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:48.147890  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:48.147899  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:48.147906  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:48.147913  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:48.147919  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:48.147931  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:48 GMT
	I0116 03:43:48.147938  787862 round_trippers.go:580]     Audit-Id: 543ed79f-5be2-42a8-9756-133590b25403
	I0116 03:43:48.148144  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"509","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 03:43:48.148561  787862 node_ready.go:58] node "multinode-741097-m02" has status "Ready":"False"
	I0116 03:43:48.645231  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:48.645251  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:48.645261  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:48.645268  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:48.647673  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:48.647692  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:48.647700  787862 round_trippers.go:580]     Audit-Id: 851dfecd-78da-4f42-bf7e-60105c04733c
	I0116 03:43:48.647706  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:48.647712  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:48.647719  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:48.647725  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:48.647731  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:48 GMT
	I0116 03:43:48.647898  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"509","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 03:43:49.146031  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:49.146058  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:49.146069  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:49.146076  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:49.148534  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:49.148555  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:49.148564  787862 round_trippers.go:580]     Audit-Id: cf8c1db1-e14e-4d8c-b8f9-5dc86d4e1606
	I0116 03:43:49.148570  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:49.148577  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:49.148583  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:49.148589  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:49.148596  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:49 GMT
	I0116 03:43:49.148963  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"509","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 03:43:49.646048  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:49.646070  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:49.646080  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:49.646087  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:49.648473  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:49.648492  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:49.648502  787862 round_trippers.go:580]     Audit-Id: a22543dd-470a-45ee-8e6d-09f05b43a9c1
	I0116 03:43:49.648508  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:49.648514  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:49.648521  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:49.648530  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:49.648537  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:49 GMT
	I0116 03:43:49.648820  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"509","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 03:43:50.145825  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:50.145848  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:50.145859  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:50.145866  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:50.148411  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:50.148442  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:50.148451  787862 round_trippers.go:580]     Audit-Id: ea3e93de-009b-4cb3-8b24-6c1129df4b43
	I0116 03:43:50.148457  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:50.148464  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:50.148470  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:50.148477  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:50.148483  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:50 GMT
	I0116 03:43:50.148587  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"509","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 03:43:50.149003  787862 node_ready.go:58] node "multinode-741097-m02" has status "Ready":"False"
	I0116 03:43:50.646150  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:50.646172  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:50.646182  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:50.646190  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:50.648544  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:50.648565  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:50.648573  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:50 GMT
	I0116 03:43:50.648580  787862 round_trippers.go:580]     Audit-Id: 08c955b9-3c77-4f3e-ba2f-e6b4e1f95473
	I0116 03:43:50.648586  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:50.648592  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:50.648599  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:50.648609  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:50.648929  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"509","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 03:43:51.145290  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:51.145315  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:51.145326  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:51.145333  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:51.147720  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:51.147739  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:51.147747  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:51.147764  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:51.147770  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:51.147776  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:51.147783  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:51 GMT
	I0116 03:43:51.147789  787862 round_trippers.go:580]     Audit-Id: 85e19fc3-b182-4f06-b3df-08b4dec81efb
	I0116 03:43:51.147907  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"509","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 03:43:51.645920  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:51.645946  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:51.645957  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:51.645964  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:51.648324  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:51.648349  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:51.648358  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:51.648364  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:51.648371  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:51 GMT
	I0116 03:43:51.648377  787862 round_trippers.go:580]     Audit-Id: ea4dbe51-fafb-4ba8-9767-2a3fb21b6fd9
	I0116 03:43:51.648386  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:51.648399  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:51.648887  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"509","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 03:43:52.145477  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:52.145504  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:52.145514  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:52.145522  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:52.147962  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:52.147982  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:52.147991  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:52.147998  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:52.148004  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:52.148011  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:52.148017  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:52 GMT
	I0116 03:43:52.148024  787862 round_trippers.go:580]     Audit-Id: 945d4acd-a47a-4242-8ffc-4569d1faf87f
	I0116 03:43:52.148163  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"509","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 03:43:52.645235  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:52.645257  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:52.645267  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:52.645274  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:52.647579  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:52.647599  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:52.647607  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:52.647614  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:52 GMT
	I0116 03:43:52.647620  787862 round_trippers.go:580]     Audit-Id: 8edca115-31f1-44df-9ded-9ac559fcca03
	I0116 03:43:52.647626  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:52.647632  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:52.647638  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:52.647766  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"509","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 03:43:52.648192  787862 node_ready.go:58] node "multinode-741097-m02" has status "Ready":"False"
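
The block above is one iteration of the node_ready poll: roughly every 500 ms the client GETs the Node object and logs its Ready condition until it flips to True or the wait times out. A minimal sketch of that pattern using client-go follows; waitNodeReady is a hypothetical helper written for illustration, not minikube's actual node_ready.go, and the 500 ms interval is inferred from the timestamps in the log.

// Minimal sketch of the readiness poll seen in the log above: GET the Node
// every 500ms and report its Ready condition until it is True or the timeout
// expires. Hypothetical helper, not minikube's actual implementation.
package nodewait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient API errors as "not ready yet" and keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					// Mirrors the log line: node "<name>" has status "Ready":"False"/"True"
					fmt.Printf("node %q has status \"Ready\":%q\n", name, c.Status)
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

Each poll in the log returns the full Node object (truncated here to ~6 KB), which is why the same response body repeats until the resourceVersion changes when the kubelet posts Ready=True.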
	I0116 03:43:53.145614  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:53.145636  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:53.145646  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:53.145653  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:53.148025  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:53.148043  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:53.148051  787862 round_trippers.go:580]     Audit-Id: 376bf7f8-c56d-4a96-98c5-8dc863f73970
	I0116 03:43:53.148059  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:53.148086  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:53.148092  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:53.148098  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:53.148104  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:53 GMT
	I0116 03:43:53.148256  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"509","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 03:43:53.645262  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:53.645285  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:53.645294  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:53.645301  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:53.647815  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:53.647837  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:53.647846  787862 round_trippers.go:580]     Audit-Id: 3d4c50bc-68a0-4ef8-9a07-ddf78685c864
	I0116 03:43:53.647852  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:53.647859  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:53.647865  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:53.647871  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:53.647877  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:53 GMT
	I0116 03:43:53.647994  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"509","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 03:43:54.145386  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:54.145411  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:54.145422  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:54.145429  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:54.147928  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:54.147953  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:54.147961  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:54.147974  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:54 GMT
	I0116 03:43:54.147982  787862 round_trippers.go:580]     Audit-Id: 16e73113-3a7a-4835-8f77-406e33db1857
	I0116 03:43:54.147988  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:54.147996  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:54.148002  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:54.148178  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"509","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 03:43:54.645298  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:54.645326  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:54.645336  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:54.645343  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:54.647694  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:54.647722  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:54.647731  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:54.647738  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:54.647745  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:54 GMT
	I0116 03:43:54.647751  787862 round_trippers.go:580]     Audit-Id: 2978d346-b0c6-4bf0-9fcd-9c474c4ca1f2
	I0116 03:43:54.647764  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:54.647770  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:54.648011  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"509","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 03:43:54.648434  787862 node_ready.go:58] node "multinode-741097-m02" has status "Ready":"False"
	I0116 03:43:55.146147  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:55.146170  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:55.146180  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:55.146187  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:55.148637  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:55.148657  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:55.148665  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:55 GMT
	I0116 03:43:55.148672  787862 round_trippers.go:580]     Audit-Id: dae443ce-4a7f-4e15-84a9-8e9a0d5998ae
	I0116 03:43:55.148678  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:55.148684  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:55.148689  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:55.148697  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:55.149080  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"509","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 03:43:55.645863  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:55.645884  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:55.645893  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:55.645900  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:55.648119  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:55.648141  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:55.648149  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:55.648156  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:55.648162  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:55.648169  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:55 GMT
	I0116 03:43:55.648178  787862 round_trippers.go:580]     Audit-Id: d3a8c299-ffa9-41f7-9022-9ef681479974
	I0116 03:43:55.648188  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:55.648500  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"509","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 03:43:56.145734  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:56.145757  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:56.145767  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:56.145774  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:56.148210  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:56.148232  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:56.148241  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:56.148251  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:56.148257  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:56.148264  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:56 GMT
	I0116 03:43:56.148271  787862 round_trippers.go:580]     Audit-Id: ccc004eb-ca0f-4131-b0c3-be63e49e9d15
	I0116 03:43:56.148278  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:56.148696  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"509","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 03:43:56.645856  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:56.645877  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:56.645887  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:56.645895  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:56.648029  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:56.648054  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:56.648080  787862 round_trippers.go:580]     Audit-Id: 81c313b3-736f-4a0f-8350-39aa3c5cc9ec
	I0116 03:43:56.648088  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:56.648094  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:56.648101  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:56.648111  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:56.648125  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:56 GMT
	I0116 03:43:56.648250  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"509","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 03:43:56.648649  787862 node_ready.go:58] node "multinode-741097-m02" has status "Ready":"False"
	I0116 03:43:57.145343  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:57.145364  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:57.145374  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:57.145382  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:57.148006  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:57.148028  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:57.148036  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:57 GMT
	I0116 03:43:57.148042  787862 round_trippers.go:580]     Audit-Id: 98620547-3dfe-4fc1-8391-f36e9b5399ca
	I0116 03:43:57.148049  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:57.148055  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:57.148073  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:57.148085  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:57.148188  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"509","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 03:43:57.645285  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:57.645309  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:57.645319  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:57.645326  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:57.647687  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:57.647710  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:57.647718  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:57.647724  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:57.647730  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:57.647737  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:57.647750  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:57 GMT
	I0116 03:43:57.647756  787862 round_trippers.go:580]     Audit-Id: e845df7c-26ab-49cf-9381-09a2d24a00c7
	I0116 03:43:57.647846  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"509","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 03:43:58.145982  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:58.146005  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:58.146015  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:58.146022  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:58.148377  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:58.148405  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:58.148413  787862 round_trippers.go:580]     Audit-Id: 58e36372-766d-4b95-8e9f-1cda6c89bbc7
	I0116 03:43:58.148420  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:58.148430  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:58.148437  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:58.148443  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:58.148450  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:58 GMT
	I0116 03:43:58.148756  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"509","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 03:43:58.645680  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:58.645700  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:58.645709  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:58.645716  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:58.647998  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:58.648015  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:58.648022  787862 round_trippers.go:580]     Audit-Id: bf07debe-77bf-4d8f-becd-5c351edf6fb4
	I0116 03:43:58.648029  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:58.648035  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:58.648041  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:58.648047  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:58.648053  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:58 GMT
	I0116 03:43:58.648202  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"509","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 03:43:59.145253  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:59.145274  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:59.145284  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:59.145291  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:59.147593  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:59.147611  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:59.147626  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:59.147633  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:59.147640  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:59.147647  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:59 GMT
	I0116 03:43:59.147653  787862 round_trippers.go:580]     Audit-Id: 4317739e-c2e3-414e-b3a1-189ef3bf6e03
	I0116 03:43:59.147659  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:59.147777  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"509","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 03:43:59.148190  787862 node_ready.go:58] node "multinode-741097-m02" has status "Ready":"False"
	I0116 03:43:59.645811  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:43:59.645842  787862 round_trippers.go:469] Request Headers:
	I0116 03:43:59.645852  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:43:59.645859  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:43:59.648089  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:43:59.648112  787862 round_trippers.go:577] Response Headers:
	I0116 03:43:59.648120  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:43:59.648127  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:43:59.648133  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:43:59 GMT
	I0116 03:43:59.648140  787862 round_trippers.go:580]     Audit-Id: c2bb1712-c3ef-4b95-8699-ec42aa82b807
	I0116 03:43:59.648147  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:43:59.648153  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:43:59.648259  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"509","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 03:44:00.145278  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:44:00.145305  787862 round_trippers.go:469] Request Headers:
	I0116 03:44:00.145315  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:44:00.145322  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:44:00.148128  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:44:00.148153  787862 round_trippers.go:577] Response Headers:
	I0116 03:44:00.148162  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:44:00.148168  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:44:00.148174  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:44:00.148180  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:44:00.148188  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:44:00 GMT
	I0116 03:44:00.148196  787862 round_trippers.go:580]     Audit-Id: d825231d-8525-4f60-b751-63d0f89c658a
	I0116 03:44:00.148348  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"509","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 03:44:00.645225  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:44:00.645257  787862 round_trippers.go:469] Request Headers:
	I0116 03:44:00.645267  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:44:00.645274  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:44:00.647649  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:44:00.647670  787862 round_trippers.go:577] Response Headers:
	I0116 03:44:00.647678  787862 round_trippers.go:580]     Audit-Id: 9cdea1e0-16cb-4c9f-81a9-232b16b3a2c0
	I0116 03:44:00.647685  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:44:00.647691  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:44:00.647697  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:44:00.647704  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:44:00.647714  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:44:00 GMT
	I0116 03:44:00.647814  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"509","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 03:44:01.145250  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:44:01.145273  787862 round_trippers.go:469] Request Headers:
	I0116 03:44:01.145284  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:44:01.145291  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:44:01.147773  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:44:01.147795  787862 round_trippers.go:577] Response Headers:
	I0116 03:44:01.147803  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:44:01.147810  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:44:01 GMT
	I0116 03:44:01.147816  787862 round_trippers.go:580]     Audit-Id: ad6dec66-466f-495e-bafe-d5b15d4b2fda
	I0116 03:44:01.147822  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:44:01.147832  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:44:01.147838  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:44:01.147951  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"532","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5930 chars]
	I0116 03:44:01.148359  787862 node_ready.go:49] node "multinode-741097-m02" has status "Ready":"True"
	I0116 03:44:01.148377  787862 node_ready.go:38] duration metric: took 31.003346346s waiting for node "multinode-741097-m02" to be "Ready" ...
	I0116 03:44:01.148387  787862 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
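
At this point the log switches from node_ready to the pod_ready phase: it lists the kube-system pods once, then waits on each system-critical pod's PodReady condition. A hedged sketch of that check, reusing the imports and polling helper style from the node sketch above (hypothetical helpers, not minikube's actual pod_ready.go):

// Sketch of the pod_ready phase seen below: list kube-system pods once, then
// poll each until its PodReady condition reports True. Hypothetical helpers.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func waitSystemPodsReady(ctx context.Context, cs kubernetes.Interface, timeout time.Duration) error {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, p := range pods.Items {
		name := p.Name
		err := wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil
				}
				return podIsReady(pod), nil
			})
		if err != nil {
			return fmt.Errorf("pod %q never became Ready: %w", name, err)
		}
	}
	return nil
}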
	I0116 03:44:01.148454  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0116 03:44:01.148463  787862 round_trippers.go:469] Request Headers:
	I0116 03:44:01.148472  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:44:01.148478  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:44:01.151660  787862 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:44:01.151683  787862 round_trippers.go:577] Response Headers:
	I0116 03:44:01.151690  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:44:01.151697  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:44:01.151703  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:44:01 GMT
	I0116 03:44:01.151709  787862 round_trippers.go:580]     Audit-Id: ab0b8b3b-4190-4e9b-a26c-806e512d84ed
	I0116 03:44:01.151716  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:44:01.151722  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:44:01.152455  787862 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"532"},"items":[{"metadata":{"name":"coredns-5dd5756b68-2z5xs","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ed28cba5-d03c-4872-8d43-ac2b9cbde1c3","resourceVersion":"444","creationTimestamp":"2024-01-16T03:42:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c85c0644-575e-40d8-9912-1bb96f25128f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c85c0644-575e-40d8-9912-1bb96f25128f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68972 chars]
	I0116 03:44:01.155379  787862 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-2z5xs" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:01.155464  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2z5xs
	I0116 03:44:01.155474  787862 round_trippers.go:469] Request Headers:
	I0116 03:44:01.155483  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:44:01.155490  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:44:01.157863  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:44:01.157894  787862 round_trippers.go:577] Response Headers:
	I0116 03:44:01.157902  787862 round_trippers.go:580]     Audit-Id: f103a136-1d5d-4ff3-980d-6009a951b0f9
	I0116 03:44:01.157908  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:44:01.157915  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:44:01.157923  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:44:01.157935  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:44:01.157942  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:44:01 GMT
	I0116 03:44:01.158124  787862 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2z5xs","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"ed28cba5-d03c-4872-8d43-ac2b9cbde1c3","resourceVersion":"444","creationTimestamp":"2024-01-16T03:42:41Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c85c0644-575e-40d8-9912-1bb96f25128f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c85c0644-575e-40d8-9912-1bb96f25128f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0116 03:44:01.158615  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:44:01.158633  787862 round_trippers.go:469] Request Headers:
	I0116 03:44:01.158641  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:44:01.158653  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:44:01.160624  787862 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 03:44:01.160640  787862 round_trippers.go:577] Response Headers:
	I0116 03:44:01.160648  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:44:01.160654  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:44:01.160660  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:44:01.160667  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:44:01 GMT
	I0116 03:44:01.160673  787862 round_trippers.go:580]     Audit-Id: 09b2a073-7efb-4fce-900c-1e07b24245f8
	I0116 03:44:01.160679  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:44:01.160768  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"424","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0116 03:44:01.161128  787862 pod_ready.go:92] pod "coredns-5dd5756b68-2z5xs" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:01.161138  787862 pod_ready.go:81] duration metric: took 5.732128ms waiting for pod "coredns-5dd5756b68-2z5xs" in "kube-system" namespace to be "Ready" ...
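
Note that each pod check above pairs the pod GET with a GET of the node hosting it (multinode-741097), and the whole check is wrapped in a "duration metric" line. The timing lines read like a simple time.Since around each wait; a plausible pattern, assumed rather than verified against the source, reusing waitNodeReady from the first sketch and adding an assumed import of k8s.io/klog/v2 (klog produces the I0116-style prefixes seen throughout this log):

// Assumed pattern behind the "duration metric: took ..." lines: wrap each
// wait in a wall-clock timer and log the elapsed time via klog.
func timedNodeWait(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	start := time.Now()
	err := waitNodeReady(ctx, cs, name, timeout)
	klog.Infof("duration metric: took %s waiting for node %q to be \"Ready\"", time.Since(start), name)
	return err
}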
	I0116 03:44:01.161147  787862 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-741097" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:01.161199  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-741097
	I0116 03:44:01.161204  787862 round_trippers.go:469] Request Headers:
	I0116 03:44:01.161210  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:44:01.161218  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:44:01.163228  787862 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 03:44:01.163278  787862 round_trippers.go:577] Response Headers:
	I0116 03:44:01.163318  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:44:01.163346  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:44:01.163365  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:44:01.163377  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:44:01 GMT
	I0116 03:44:01.163384  787862 round_trippers.go:580]     Audit-Id: c83b7e31-8b5e-4495-ac3d-5a95e04ec21e
	I0116 03:44:01.163390  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:44:01.163464  787862 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-741097","namespace":"kube-system","uid":"e88b8e2a-3aa3-4ddc-93aa-e8119b68034e","resourceVersion":"318","creationTimestamp":"2024-01-16T03:42:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"ae5a83c589ec43e0eae7b90d3c11eb5e","kubernetes.io/config.mirror":"ae5a83c589ec43e0eae7b90d3c11eb5e","kubernetes.io/config.seen":"2024-01-16T03:42:28.150313420Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0116 03:44:01.163873  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:44:01.163888  787862 round_trippers.go:469] Request Headers:
	I0116 03:44:01.163895  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:44:01.163902  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:44:01.165822  787862 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 03:44:01.165839  787862 round_trippers.go:577] Response Headers:
	I0116 03:44:01.165846  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:44:01.165852  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:44:01 GMT
	I0116 03:44:01.165858  787862 round_trippers.go:580]     Audit-Id: 689ea123-20bb-4852-a82a-71cc46314718
	I0116 03:44:01.165865  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:44:01.165871  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:44:01.165877  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:44:01.166012  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"424","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0116 03:44:01.166367  787862 pod_ready.go:92] pod "etcd-multinode-741097" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:01.166377  787862 pod_ready.go:81] duration metric: took 5.223136ms waiting for pod "etcd-multinode-741097" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:01.166391  787862 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-741097" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:01.166442  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-741097
	I0116 03:44:01.166447  787862 round_trippers.go:469] Request Headers:
	I0116 03:44:01.166454  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:44:01.166460  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:44:01.168299  787862 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 03:44:01.168314  787862 round_trippers.go:577] Response Headers:
	I0116 03:44:01.168322  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:44:01.168329  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:44:01.168334  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:44:01.168340  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:44:01.168346  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:44:01 GMT
	I0116 03:44:01.168353  787862 round_trippers.go:580]     Audit-Id: 9773724d-6ef2-4ef2-a0c9-fadd0d1ee83a
	I0116 03:44:01.168475  787862 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-741097","namespace":"kube-system","uid":"15831bc8-c5f4-4288-adf2-c5af42d05ebb","resourceVersion":"328","creationTimestamp":"2024-01-16T03:42:26Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"ac7ff780fba56f1234d32f5a3c8a2527","kubernetes.io/config.mirror":"ac7ff780fba56f1234d32f5a3c8a2527","kubernetes.io/config.seen":"2024-01-16T03:42:20.538989474Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0116 03:44:01.168964  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:44:01.168972  787862 round_trippers.go:469] Request Headers:
	I0116 03:44:01.168980  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:44:01.168987  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:44:01.170949  787862 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 03:44:01.171002  787862 round_trippers.go:577] Response Headers:
	I0116 03:44:01.171025  787862 round_trippers.go:580]     Audit-Id: 1138bb5f-ebd1-4103-9573-314cad9af448
	I0116 03:44:01.171039  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:44:01.171045  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:44:01.171051  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:44:01.171076  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:44:01.171093  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:44:01 GMT
	I0116 03:44:01.171202  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"424","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0116 03:44:01.171583  787862 pod_ready.go:92] pod "kube-apiserver-multinode-741097" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:01.171599  787862 pod_ready.go:81] duration metric: took 5.200801ms waiting for pod "kube-apiserver-multinode-741097" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:01.171609  787862 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-741097" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:01.171661  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-741097
	I0116 03:44:01.171673  787862 round_trippers.go:469] Request Headers:
	I0116 03:44:01.171680  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:44:01.171687  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:44:01.173711  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:44:01.173728  787862 round_trippers.go:577] Response Headers:
	I0116 03:44:01.173736  787862 round_trippers.go:580]     Audit-Id: 8633ecf5-b673-4894-99b3-3b574dfa0af4
	I0116 03:44:01.173742  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:44:01.173748  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:44:01.173754  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:44:01.173767  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:44:01.173786  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:44:01 GMT
	I0116 03:44:01.173923  787862 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-741097","namespace":"kube-system","uid":"848561d9-4f18-415a-afb3-a1697ab9738a","resourceVersion":"323","creationTimestamp":"2024-01-16T03:42:28Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"245252063dc61aa82cf10d0a0b149c59","kubernetes.io/config.mirror":"245252063dc61aa82cf10d0a0b149c59","kubernetes.io/config.seen":"2024-01-16T03:42:28.150319747Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0116 03:44:01.174395  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:44:01.174409  787862 round_trippers.go:469] Request Headers:
	I0116 03:44:01.174417  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:44:01.174424  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:44:01.176232  787862 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 03:44:01.176251  787862 round_trippers.go:577] Response Headers:
	I0116 03:44:01.176258  787862 round_trippers.go:580]     Audit-Id: a12dfcc9-a33d-45eb-948f-88243f23542e
	I0116 03:44:01.176265  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:44:01.176271  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:44:01.176277  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:44:01.176283  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:44:01.176291  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:44:01 GMT
	I0116 03:44:01.176471  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"424","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0116 03:44:01.176879  787862 pod_ready.go:92] pod "kube-controller-manager-multinode-741097" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:01.176896  787862 pod_ready.go:81] duration metric: took 5.278053ms waiting for pod "kube-controller-manager-multinode-741097" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:01.176906  787862 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cm64c" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:01.346276  787862 request.go:629] Waited for 169.304711ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cm64c
	I0116 03:44:01.346422  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cm64c
	I0116 03:44:01.346456  787862 round_trippers.go:469] Request Headers:
	I0116 03:44:01.346483  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:44:01.346501  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:44:01.350288  787862 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:44:01.350345  787862 round_trippers.go:577] Response Headers:
	I0116 03:44:01.350375  787862 round_trippers.go:580]     Audit-Id: 41ee9656-02b4-45e4-9eb2-718653316d2c
	I0116 03:44:01.350394  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:44:01.350423  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:44:01.350441  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:44:01.350469  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:44:01.350481  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:44:01 GMT
	I0116 03:44:01.350636  787862 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-cm64c","generateName":"kube-proxy-","namespace":"kube-system","uid":"07b12aa4-20cf-4db6-8c2b-80085bc219a5","resourceVersion":"411","creationTimestamp":"2024-01-16T03:42:41Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"6c0da6ec-1f97-48d4-bc73-49dc78d5a834","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6c0da6ec-1f97-48d4-bc73-49dc78d5a834\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0116 03:44:01.545630  787862 request.go:629] Waited for 194.497904ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:44:01.545738  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:44:01.545775  787862 round_trippers.go:469] Request Headers:
	I0116 03:44:01.545791  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:44:01.545799  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:44:01.548239  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:44:01.548289  787862 round_trippers.go:577] Response Headers:
	I0116 03:44:01.548309  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:44:01.548326  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:44:01.548346  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:44:01 GMT
	I0116 03:44:01.548379  787862 round_trippers.go:580]     Audit-Id: 00565912-8ca6-45df-bf96-d1065428f254
	I0116 03:44:01.548410  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:44:01.548430  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:44:01.548614  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"424","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0116 03:44:01.549048  787862 pod_ready.go:92] pod "kube-proxy-cm64c" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:01.549091  787862 pod_ready.go:81] duration metric: took 372.178323ms waiting for pod "kube-proxy-cm64c" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:01.549116  787862 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zcv72" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:01.745470  787862 request.go:629] Waited for 196.24711ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zcv72
	I0116 03:44:01.745538  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zcv72
	I0116 03:44:01.745550  787862 round_trippers.go:469] Request Headers:
	I0116 03:44:01.745575  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:44:01.745587  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:44:01.748128  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:44:01.748211  787862 round_trippers.go:577] Response Headers:
	I0116 03:44:01.748228  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:44:01.748236  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:44:01.748243  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:44:01.748264  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:44:01 GMT
	I0116 03:44:01.748278  787862 round_trippers.go:580]     Audit-Id: d0050de5-f82c-4fa4-98eb-e21179b21ddd
	I0116 03:44:01.748285  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:44:01.748434  787862 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-zcv72","generateName":"kube-proxy-","namespace":"kube-system","uid":"3ed656c1-332b-4c24-886f-5e7f6269a802","resourceVersion":"496","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"6c0da6ec-1f97-48d4-bc73-49dc78d5a834","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6c0da6ec-1f97-48d4-bc73-49dc78d5a834\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0116 03:44:01.946262  787862 request.go:629] Waited for 197.324821ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:44:01.946319  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097-m02
	I0116 03:44:01.946325  787862 round_trippers.go:469] Request Headers:
	I0116 03:44:01.946334  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:44:01.946345  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:44:01.948754  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:44:01.948803  787862 round_trippers.go:577] Response Headers:
	I0116 03:44:01.948823  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:44:01 GMT
	I0116 03:44:01.948843  787862 round_trippers.go:580]     Audit-Id: 3160f3f8-cc3d-40b7-bb27-e878c38367f8
	I0116 03:44:01.948859  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:44:01.948886  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:44:01.948908  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:44:01.948925  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:44:01.949738  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097-m02","uid":"7c0c07c2-ae7e-42e9-af79-b0fc1db36867","resourceVersion":"533","creationTimestamp":"2024-01-16T03:43:29Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_43_29_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:43:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5810 chars]
	I0116 03:44:01.950131  787862 pod_ready.go:92] pod "kube-proxy-zcv72" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:01.950150  787862 pod_ready.go:81] duration metric: took 401.017409ms waiting for pod "kube-proxy-zcv72" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:01.950160  787862 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-741097" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:02.146055  787862 request.go:629] Waited for 195.824708ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-741097
	I0116 03:44:02.146131  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-741097
	I0116 03:44:02.146142  787862 round_trippers.go:469] Request Headers:
	I0116 03:44:02.146151  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:44:02.146158  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:44:02.148732  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:44:02.148760  787862 round_trippers.go:577] Response Headers:
	I0116 03:44:02.148769  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:44:02.148776  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:44:02.148793  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:44:02.148801  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:44:02 GMT
	I0116 03:44:02.148807  787862 round_trippers.go:580]     Audit-Id: 23305b43-18dd-46d1-b624-9caae0ddcc23
	I0116 03:44:02.148814  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:44:02.149922  787862 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-741097","namespace":"kube-system","uid":"1cd5ce84-044b-4867-be0b-45f71f0946b9","resourceVersion":"320","creationTimestamp":"2024-01-16T03:42:28Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8aac5e6028eb628a89e364f5d125fbc0","kubernetes.io/config.mirror":"8aac5e6028eb628a89e364f5d125fbc0","kubernetes.io/config.seen":"2024-01-16T03:42:28.150320674Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:42:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0116 03:44:02.345662  787862 request.go:629] Waited for 195.311719ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:44:02.345738  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-741097
	I0116 03:44:02.345749  787862 round_trippers.go:469] Request Headers:
	I0116 03:44:02.345758  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:44:02.345769  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:44:02.348259  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:44:02.348283  787862 round_trippers.go:577] Response Headers:
	I0116 03:44:02.348291  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:44:02.348297  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:44:02.348309  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:44:02.348319  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:44:02 GMT
	I0116 03:44:02.348325  787862 round_trippers.go:580]     Audit-Id: e9847327-3d51-457d-bd5f-d60eddfdd1c2
	I0116 03:44:02.348334  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:44:02.348684  787862 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"424","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T03:42:25Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0116 03:44:02.349114  787862 pod_ready.go:92] pod "kube-scheduler-multinode-741097" in "kube-system" namespace has status "Ready":"True"
	I0116 03:44:02.349129  787862 pod_ready.go:81] duration metric: took 398.958752ms waiting for pod "kube-scheduler-multinode-741097" in "kube-system" namespace to be "Ready" ...
	I0116 03:44:02.349142  787862 pod_ready.go:38] duration metric: took 1.200734598s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:44:02.349158  787862 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 03:44:02.349223  787862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:44:02.363330  787862 system_svc.go:56] duration metric: took 14.164469ms WaitForService to wait for kubelet.
	I0116 03:44:02.363355  787862 kubeadm.go:581] duration metric: took 32.246205313s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 03:44:02.363374  787862 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:44:02.545747  787862 request.go:629] Waited for 182.302114ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0116 03:44:02.545839  787862 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0116 03:44:02.545850  787862 round_trippers.go:469] Request Headers:
	I0116 03:44:02.545859  787862 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:44:02.545882  787862 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 03:44:02.548828  787862 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:44:02.548850  787862 round_trippers.go:577] Response Headers:
	I0116 03:44:02.548860  787862 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:44:02 GMT
	I0116 03:44:02.548866  787862 round_trippers.go:580]     Audit-Id: 51787562-d29f-4da6-9d84-36d9c514f091
	I0116 03:44:02.548873  787862 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:44:02.548883  787862 round_trippers.go:580]     Content-Type: application/json
	I0116 03:44:02.548892  787862 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c3558471-7c39-46a4-a695-d294805cb1fc
	I0116 03:44:02.548986  787862 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 695c7bfe-26fd-4079-8c48-8caa492ed384
	I0116 03:44:02.549726  787862 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"534"},"items":[{"metadata":{"name":"multinode-741097","uid":"bf1f5e1b-e568-421f-a863-91ad11c1a449","resourceVersion":"424","creationTimestamp":"2024-01-16T03:42:25Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-741097","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-741097","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T03_42_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12884 chars]
	I0116 03:44:02.550475  787862 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0116 03:44:02.550498  787862 node_conditions.go:123] node cpu capacity is 2
	I0116 03:44:02.550508  787862 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0116 03:44:02.550519  787862 node_conditions.go:123] node cpu capacity is 2
	I0116 03:44:02.550528  787862 node_conditions.go:105] duration metric: took 187.14777ms to run NodePressure ...
	I0116 03:44:02.550539  787862 start.go:228] waiting for startup goroutines ...
	I0116 03:44:02.550568  787862 start.go:242] writing updated cluster config ...
	I0116 03:44:02.550886  787862 ssh_runner.go:195] Run: rm -f paused
	I0116 03:44:02.613779  787862 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0116 03:44:02.617978  787862 out.go:177] * Done! kubectl is now configured to use "multinode-741097" cluster and "default" namespace by default
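	
	The pod_ready polling above (pod_ready.go) repeatedly GETs each control-plane pod and then its node until the pod reports the Ready condition. A roughly equivalent manual check, assuming minikube named the kubectl context after the profile (multinode-741097), is:
	
	  kubectl --context multinode-741097 -n kube-system wait pod \
	    --selector k8s-app=kube-proxy --for=condition=Ready --timeout=6m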
	
	
	==> CRI-O <==
	Jan 16 03:43:13 multinode-741097 crio[901]: time="2024-01-16 03:43:13.262456217Z" level=info msg="Starting container: fa9e28ad7477ecbe3e7716537aff0385a3f6e496192374edc606571c6a621230" id=1518dc58-d892-4efa-b1c8-a47265bd6a35 name=/runtime.v1.RuntimeService/StartContainer
	Jan 16 03:43:13 multinode-741097 crio[901]: time="2024-01-16 03:43:13.273389769Z" level=info msg="Created container cdd13348c95ecbf560f463fa51ca76067d7446a16a8aebe80a39703d48f74824: kube-system/coredns-5dd5756b68-2z5xs/coredns" id=65c5f0f1-4a29-4e80-abd9-01c54b652115 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 16 03:43:13 multinode-741097 crio[901]: time="2024-01-16 03:43:13.274190235Z" level=info msg="Starting container: cdd13348c95ecbf560f463fa51ca76067d7446a16a8aebe80a39703d48f74824" id=a5db1417-d497-4c4e-964e-1633334c51ab name=/runtime.v1.RuntimeService/StartContainer
	Jan 16 03:43:13 multinode-741097 crio[901]: time="2024-01-16 03:43:13.282926372Z" level=info msg="Started container" PID=1927 containerID=fa9e28ad7477ecbe3e7716537aff0385a3f6e496192374edc606571c6a621230 description=kube-system/storage-provisioner/storage-provisioner id=1518dc58-d892-4efa-b1c8-a47265bd6a35 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4b9bf740dd0d273e7a6ac87d3eadf8efcaf4b6e8dd4c56c365cb1e5da2c17e7d
	Jan 16 03:43:13 multinode-741097 crio[901]: time="2024-01-16 03:43:13.284789294Z" level=info msg="Started container" PID=1936 containerID=cdd13348c95ecbf560f463fa51ca76067d7446a16a8aebe80a39703d48f74824 description=kube-system/coredns-5dd5756b68-2z5xs/coredns id=a5db1417-d497-4c4e-964e-1633334c51ab name=/runtime.v1.RuntimeService/StartContainer sandboxID=4e2a331e64ec8b63a88861bd7e520469fac139cd018a76b335ab0ea07a2e737b
	Jan 16 03:44:03 multinode-741097 crio[901]: time="2024-01-16 03:44:03.806331675Z" level=info msg="Running pod sandbox: default/busybox-5bc68d56bd-5xhls/POD" id=0e84374e-f2e4-470b-a540-d43020092197 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 16 03:44:03 multinode-741097 crio[901]: time="2024-01-16 03:44:03.806390646Z" level=warning msg="Allowed annotations are specified for workload []"
	Jan 16 03:44:03 multinode-741097 crio[901]: time="2024-01-16 03:44:03.838567586Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-5xhls Namespace:default ID:6ccf73c79a26f09120774ef379892c736cbfc95a3eed4857dd949f246f64a097 UID:fa119058-896b-4bdc-ba1c-ec1a1c512cf2 NetNS:/var/run/netns/a68c5578-2f6e-411a-922d-5b4add64a21f Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 16 03:44:03 multinode-741097 crio[901]: time="2024-01-16 03:44:03.838605010Z" level=info msg="Adding pod default_busybox-5bc68d56bd-5xhls to CNI network \"kindnet\" (type=ptp)"
	Jan 16 03:44:03 multinode-741097 crio[901]: time="2024-01-16 03:44:03.847382141Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-5xhls Namespace:default ID:6ccf73c79a26f09120774ef379892c736cbfc95a3eed4857dd949f246f64a097 UID:fa119058-896b-4bdc-ba1c-ec1a1c512cf2 NetNS:/var/run/netns/a68c5578-2f6e-411a-922d-5b4add64a21f Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 16 03:44:03 multinode-741097 crio[901]: time="2024-01-16 03:44:03.847521949Z" level=info msg="Checking pod default_busybox-5bc68d56bd-5xhls for CNI network kindnet (type=ptp)"
	Jan 16 03:44:03 multinode-741097 crio[901]: time="2024-01-16 03:44:03.850966389Z" level=info msg="Ran pod sandbox 6ccf73c79a26f09120774ef379892c736cbfc95a3eed4857dd949f246f64a097 with infra container: default/busybox-5bc68d56bd-5xhls/POD" id=0e84374e-f2e4-470b-a540-d43020092197 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 16 03:44:03 multinode-741097 crio[901]: time="2024-01-16 03:44:03.851882654Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=daa4ba02-9486-4485-9df3-65eda1eff1d6 name=/runtime.v1.ImageService/ImageStatus
	Jan 16 03:44:03 multinode-741097 crio[901]: time="2024-01-16 03:44:03.852222948Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=daa4ba02-9486-4485-9df3-65eda1eff1d6 name=/runtime.v1.ImageService/ImageStatus
	Jan 16 03:44:03 multinode-741097 crio[901]: time="2024-01-16 03:44:03.853075688Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=41b61bae-975c-4c17-a6a9-7e2552e20383 name=/runtime.v1.ImageService/PullImage
	Jan 16 03:44:03 multinode-741097 crio[901]: time="2024-01-16 03:44:03.853986686Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Jan 16 03:44:04 multinode-741097 crio[901]: time="2024-01-16 03:44:04.483504217Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Jan 16 03:44:05 multinode-741097 crio[901]: time="2024-01-16 03:44:05.677672207Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3" id=41b61bae-975c-4c17-a6a9-7e2552e20383 name=/runtime.v1.ImageService/PullImage
	Jan 16 03:44:05 multinode-741097 crio[901]: time="2024-01-16 03:44:05.678568976Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=1e75d366-10f8-4557-94f1-b357801c42f9 name=/runtime.v1.ImageService/ImageStatus
	Jan 16 03:44:05 multinode-741097 crio[901]: time="2024-01-16 03:44:05.679155385Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1496796,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=1e75d366-10f8-4557-94f1-b357801c42f9 name=/runtime.v1.ImageService/ImageStatus
	Jan 16 03:44:05 multinode-741097 crio[901]: time="2024-01-16 03:44:05.679833502Z" level=info msg="Creating container: default/busybox-5bc68d56bd-5xhls/busybox" id=c0c1652a-252c-4d10-88e3-6685aa80f7ec name=/runtime.v1.RuntimeService/CreateContainer
	Jan 16 03:44:05 multinode-741097 crio[901]: time="2024-01-16 03:44:05.679924038Z" level=warning msg="Allowed annotations are specified for workload []"
	Jan 16 03:44:05 multinode-741097 crio[901]: time="2024-01-16 03:44:05.737007384Z" level=info msg="Created container ed22de4bc4e286745cd7539ab00aafba9c73f81a7b9ef66520c0f21d029e39f3: default/busybox-5bc68d56bd-5xhls/busybox" id=c0c1652a-252c-4d10-88e3-6685aa80f7ec name=/runtime.v1.RuntimeService/CreateContainer
	Jan 16 03:44:05 multinode-741097 crio[901]: time="2024-01-16 03:44:05.737727734Z" level=info msg="Starting container: ed22de4bc4e286745cd7539ab00aafba9c73f81a7b9ef66520c0f21d029e39f3" id=fede936c-f33e-49c5-930a-895ff7a58401 name=/runtime.v1.RuntimeService/StartContainer
	Jan 16 03:44:05 multinode-741097 crio[901]: time="2024-01-16 03:44:05.746008976Z" level=info msg="Started container" PID=2086 containerID=ed22de4bc4e286745cd7539ab00aafba9c73f81a7b9ef66520c0f21d029e39f3 description=default/busybox-5bc68d56bd-5xhls/busybox id=fede936c-f33e-49c5-930a-895ff7a58401 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6ccf73c79a26f09120774ef379892c736cbfc95a3eed4857dd949f246f64a097
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	ed22de4bc4e28       gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3   5 seconds ago        Running             busybox                   0                   6ccf73c79a26f       busybox-5bc68d56bd-5xhls
	cdd13348c95ec       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      57 seconds ago       Running             coredns                   0                   4e2a331e64ec8       coredns-5dd5756b68-2z5xs
	fa9e28ad7477e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      57 seconds ago       Running             storage-provisioner       0                   4b9bf740dd0d2       storage-provisioner
	96413a8491ec7       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39                                      About a minute ago   Running             kube-proxy                0                   e4379e85d22d6       kube-proxy-cm64c
	0fb694305f8bd       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                      About a minute ago   Running             kindnet-cni               0                   853f66bd29ef2       kindnet-g8srb
	ae097f9796d05       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b                                      About a minute ago   Running             kube-controller-manager   0                   b708cbb3b7cee       kube-controller-manager-multinode-741097
	61ae06d6a2da6       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419                                      About a minute ago   Running             kube-apiserver            0                   3069fd3cd01d6       kube-apiserver-multinode-741097
	5ca261dc99764       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      About a minute ago   Running             etcd                      0                   631249bb27240       etcd-multinode-741097
	7f5c04dc48c44       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54                                      About a minute ago   Running             kube-scheduler            0                   ce88c356a80ec       kube-scheduler-multinode-741097
	
	
	==> coredns [cdd13348c95ecbf560f463fa51ca76067d7446a16a8aebe80a39703d48f74824] <==
	[INFO] 10.244.0.3:51317 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000103697s
	[INFO] 10.244.1.2:58182 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133179s
	[INFO] 10.244.1.2:48772 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001004495s
	[INFO] 10.244.1.2:42587 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000074011s
	[INFO] 10.244.1.2:36624 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000069342s
	[INFO] 10.244.1.2:56003 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.000738057s
	[INFO] 10.244.1.2:41688 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000073494s
	[INFO] 10.244.1.2:36886 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000752s
	[INFO] 10.244.1.2:53661 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00004599s
	[INFO] 10.244.0.3:33630 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134417s
	[INFO] 10.244.0.3:56746 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000060866s
	[INFO] 10.244.0.3:56311 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000060702s
	[INFO] 10.244.0.3:59379 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000041879s
	[INFO] 10.244.1.2:41004 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145716s
	[INFO] 10.244.1.2:44049 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000073207s
	[INFO] 10.244.1.2:35065 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00007785s
	[INFO] 10.244.1.2:47490 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00006542s
	[INFO] 10.244.0.3:53027 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128116s
	[INFO] 10.244.0.3:35409 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000086507s
	[INFO] 10.244.0.3:37497 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000076562s
	[INFO] 10.244.0.3:54315 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000082577s
	[INFO] 10.244.1.2:51131 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000158992s
	[INFO] 10.244.1.2:37430 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000075628s
	[INFO] 10.244.1.2:38144 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000077317s
	[INFO] 10.244.1.2:60912 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000069522s
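	
	Each coredns line above follows the CoreDNS log plugin format: client address and port, query ID, then the quoted query (type, class, name, transport, message size, DO bit, EDNS buffer size), followed by the response code, response flags, response size, and duration. To reproduce such a lookup from inside the cluster, a throwaway pod works; this sketch reuses the gcr.io/k8s-minikube/busybox:1.28 image already pulled above:
	
	  kubectl --context multinode-741097 run dnsprobe --rm -it --restart=Never \
	    --image=gcr.io/k8s-minikube/busybox:1.28 -- nslookup kubernetes.default.svc.cluster.local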
	
	
	==> describe nodes <==
	Name:               multinode-741097
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-741097
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd
	                    minikube.k8s.io/name=multinode-741097
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_16T03_42_29_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 03:42:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-741097
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Jan 2024 03:44:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 03:43:12 +0000   Tue, 16 Jan 2024 03:42:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 03:43:12 +0000   Tue, 16 Jan 2024 03:42:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 03:43:12 +0000   Tue, 16 Jan 2024 03:42:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 03:43:12 +0000   Tue, 16 Jan 2024 03:43:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-741097
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 19ad7f900556444d83dcc8f413b386db
	  System UUID:                04c15e61-d5cd-4e89-a0fe-ee9e3035ce76
	  Boot ID:                    8bf0f894-1a91-4593-91c4-b833f91013d1
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-5xhls                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 coredns-5dd5756b68-2z5xs                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     89s
	  kube-system                 etcd-multinode-741097                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         102s
	  kube-system                 kindnet-g8srb                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      89s
	  kube-system                 kube-apiserver-multinode-741097             250m (12%)    0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-controller-manager-multinode-741097    200m (10%)    0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-proxy-cm64c                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-scheduler-multinode-741097             100m (5%)     0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 87s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  110s (x8 over 110s)  kubelet          Node multinode-741097 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    110s (x8 over 110s)  kubelet          Node multinode-741097 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     110s (x8 over 110s)  kubelet          Node multinode-741097 status is now: NodeHasSufficientPID
	  Normal  Starting                 102s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  102s                 kubelet          Node multinode-741097 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    102s                 kubelet          Node multinode-741097 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     102s                 kubelet          Node multinode-741097 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           89s                  node-controller  Node multinode-741097 event: Registered Node multinode-741097 in Controller
	  Normal  NodeReady                58s                  kubelet          Node multinode-741097 status is now: NodeReady
	
	
	Name:               multinode-741097-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-741097-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd
	                    minikube.k8s.io/name=multinode-741097
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_16T03_43_29_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 03:43:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-741097-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Jan 2024 03:44:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 03:44:00 +0000   Tue, 16 Jan 2024 03:43:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 03:44:00 +0000   Tue, 16 Jan 2024 03:43:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 03:44:00 +0000   Tue, 16 Jan 2024 03:43:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 03:44:00 +0000   Tue, 16 Jan 2024 03:44:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-741097-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 0027833191354aabab8d2bb385e54347
	  System UUID:                8a2b36bf-9a1c-436e-9b0a-c5bdcc39ba8f
	  Boot ID:                    8bf0f894-1a91-4593-91c4-b833f91013d1
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-zwvv5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kindnet-t7pdg               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      41s
	  kube-system                 kube-proxy-zcv72            0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 40s                kube-proxy       
	  Normal  NodeHasSufficientMemory  41s (x5 over 43s)  kubelet          Node multinode-741097-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s (x5 over 43s)  kubelet          Node multinode-741097-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s (x5 over 43s)  kubelet          Node multinode-741097-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           39s                node-controller  Node multinode-741097-m02 event: Registered Node multinode-741097-m02 in Controller
	  Normal  NodeReady                10s                kubelet          Node multinode-741097-m02 status is now: NodeReady
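	
	Both nodes report Ready=True with no memory, disk, or PID pressure, consistent with the node_conditions check earlier in this log. A quick spot-check of just the Ready condition (a sketch, assuming the same context name) is:
	
	  kubectl --context multinode-741097 get node multinode-741097-m02 \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'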
	
	
	==> dmesg <==
	[  +0.001097] FS-Cache: O-key=[8] 'c570ed0000000000'
	[  +0.000749] FS-Cache: N-cookie c=00000030 [p=00000027 fl=2 nc=0 na=1]
	[  +0.001056] FS-Cache: N-cookie d=0000000029d6e22a{9p.inode} n=00000000c6cf4d72
	[  +0.001100] FS-Cache: N-key=[8] 'c570ed0000000000'
	[  +0.004636] FS-Cache: Duplicate cookie detected
	[  +0.000709] FS-Cache: O-cookie c=0000002a [p=00000027 fl=226 nc=0 na=1]
	[  +0.001231] FS-Cache: O-cookie d=0000000029d6e22a{9p.inode} n=000000008f0122ea
	[  +0.001197] FS-Cache: O-key=[8] 'c570ed0000000000'
	[  +0.000769] FS-Cache: N-cookie c=00000031 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000976] FS-Cache: N-cookie d=0000000029d6e22a{9p.inode} n=000000004bbac6e1
	[  +0.001161] FS-Cache: N-key=[8] 'c570ed0000000000'
	[  +2.789451] FS-Cache: Duplicate cookie detected
	[  +0.000734] FS-Cache: O-cookie c=00000029 [p=00000027 fl=226 nc=0 na=1]
	[  +0.000968] FS-Cache: O-cookie d=0000000029d6e22a{9p.inode} n=00000000b0405fe5
	[  +0.001144] FS-Cache: O-key=[8] 'c470ed0000000000'
	[  +0.000705] FS-Cache: N-cookie c=00000033 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000974] FS-Cache: N-cookie d=0000000029d6e22a{9p.inode} n=00000000c6cf4d72
	[  +0.001064] FS-Cache: N-key=[8] 'c470ed0000000000'
	[  +0.345600] FS-Cache: Duplicate cookie detected
	[  +0.000705] FS-Cache: O-cookie c=0000002d [p=00000027 fl=226 nc=0 na=1]
	[  +0.001146] FS-Cache: O-cookie d=0000000029d6e22a{9p.inode} n=00000000fd1f03d1
	[  +0.001221] FS-Cache: O-key=[8] 'ca70ed0000000000'
	[  +0.000720] FS-Cache: N-cookie c=00000034 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000979] FS-Cache: N-cookie d=0000000029d6e22a{9p.inode} n=00000000057ce508
	[  +0.001118] FS-Cache: N-key=[8] 'ca70ed0000000000'
	
	
	==> etcd [5ca261dc9976412f6318ea630868bbf5ae4bea7fc562ff458fdeb9d21a4363cd] <==
	{"level":"info","ts":"2024-01-16T03:42:21.359974Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2024-01-16T03:42:21.360091Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2024-01-16T03:42:21.362023Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-01-16T03:42:21.362125Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2024-01-16T03:42:21.362149Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2024-01-16T03:42:21.362787Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-16T03:42:21.362828Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-16T03:42:22.027467Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-16T03:42:22.027581Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-16T03:42:22.027633Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2024-01-16T03:42:22.027675Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2024-01-16T03:42:22.027707Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2024-01-16T03:42:22.027745Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2024-01-16T03:42:22.027779Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2024-01-16T03:42:22.032241Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-741097 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-16T03:42:22.032357Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-16T03:42:22.033376Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-16T03:42:22.033491Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T03:42:22.036093Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-16T03:42:22.036165Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T03:42:22.036279Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T03:42:22.036327Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T03:42:22.036993Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2024-01-16T03:42:22.044084Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-16T03:42:22.04411Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 03:44:11 up  3:26,  0 users,  load average: 1.00, 1.40, 1.65
	Linux multinode-741097 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [0fb694305f8bd15f5b5bf0530e4de80d859bb81e998d1a82c9e929e4a27b273d] <==
	I0116 03:42:42.192343       1 main.go:116] setting mtu 1500 for CNI 
	I0116 03:42:42.192442       1 main.go:146] kindnetd IP family: "ipv4"
	I0116 03:42:42.193653       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0116 03:43:12.385710       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0116 03:43:12.398923       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0116 03:43:12.398949       1 main.go:227] handling current node
	I0116 03:43:22.416360       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0116 03:43:22.416486       1 main.go:227] handling current node
	I0116 03:43:32.428699       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0116 03:43:32.428726       1 main.go:227] handling current node
	I0116 03:43:32.428736       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0116 03:43:32.428742       1 main.go:250] Node multinode-741097-m02 has CIDR [10.244.1.0/24] 
	I0116 03:43:32.428893       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I0116 03:43:42.441499       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0116 03:43:42.441624       1 main.go:227] handling current node
	I0116 03:43:42.441647       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0116 03:43:42.441654       1 main.go:250] Node multinode-741097-m02 has CIDR [10.244.1.0/24] 
	I0116 03:43:52.450845       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0116 03:43:52.450873       1 main.go:227] handling current node
	I0116 03:43:52.450883       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0116 03:43:52.450889       1 main.go:250] Node multinode-741097-m02 has CIDR [10.244.1.0/24] 
	I0116 03:44:02.459894       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0116 03:44:02.459924       1 main.go:227] handling current node
	I0116 03:44:02.459934       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0116 03:44:02.459945       1 main.go:250] Node multinode-741097-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [61ae06d6a2da632d93ca62f210eb7d103655bee624906892d446709118d16787] <==
	I0116 03:42:25.984337       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0116 03:42:25.984364       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0116 03:42:26.461171       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0116 03:42:26.500418       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0116 03:42:26.589234       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0116 03:42:26.596576       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0116 03:42:26.597599       1 controller.go:624] quota admission added evaluator for: endpoints
	I0116 03:42:26.602524       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0116 03:42:27.224341       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0116 03:42:28.078888       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0116 03:42:28.090759       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0116 03:42:28.109078       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	E0116 03:42:35.268633       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["leader-election","workload-high","workload-low","global-default","catch-all","exempt","system","node-high"] items=[{},{},{},{},{},{},{},{}]
	I0116 03:42:41.317266       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0116 03:42:41.398386       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	E0116 03:42:45.269619       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["exempt","system","node-high","leader-election","workload-high","workload-low","global-default","catch-all"] items=[{},{},{},{},{},{},{},{}]
	E0116 03:42:55.270802       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["leader-election","workload-high","workload-low","global-default","catch-all","exempt","system","node-high"] items=[{},{},{},{},{},{},{},{}]
	E0116 03:43:05.271661       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","leader-election","workload-high","workload-low","global-default","catch-all","exempt","system"] items=[{},{},{},{},{},{},{},{}]
	E0116 03:43:15.272533       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","leader-election","workload-high","workload-low","global-default","catch-all","exempt","system"] items=[{},{},{},{},{},{},{},{}]
	E0116 03:43:25.273040       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["global-default","catch-all","exempt","system","node-high","leader-election","workload-high","workload-low"] items=[{},{},{},{},{},{},{},{}]
	E0116 03:43:35.273327       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","leader-election","workload-high","workload-low","global-default","catch-all","exempt","system"] items=[{},{},{},{},{},{},{},{}]
	E0116 03:43:45.274335       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-low","global-default","catch-all","exempt","system","node-high","leader-election","workload-high"] items=[{},{},{},{},{},{},{},{}]
	E0116 03:43:55.275488       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["global-default","catch-all","exempt","system","node-high","leader-election","workload-high","workload-low"] items=[{},{},{},{},{},{},{},{}]
	E0116 03:44:05.276459       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["exempt","system","node-high","leader-election","workload-high","workload-low","global-default","catch-all"] items=[{},{},{},{},{},{},{},{}]
	E0116 03:44:08.728233       1 upgradeaware.go:439] Error proxying data from backend to client: write tcp 192.168.58.2:8443->192.168.58.1:56594: write: broken pipe
	
	
	==> kube-controller-manager [ae097f9796d05f0d2950606e3ec64b14477d026ff207ffa3e2ef4930f8952ebc] <==
	I0116 03:43:12.844882       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="63.705µs"
	I0116 03:43:12.860347       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="78.065µs"
	I0116 03:43:13.397458       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="68.39µs"
	I0116 03:43:14.409952       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.068577ms"
	I0116 03:43:14.410486       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="59.865µs"
	I0116 03:43:16.329412       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0116 03:43:29.021309       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-741097-m02\" does not exist"
	I0116 03:43:29.038911       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-741097-m02" podCIDRs=["10.244.1.0/24"]
	I0116 03:43:29.043636       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-t7pdg"
	I0116 03:43:29.051720       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-zcv72"
	I0116 03:43:31.330989       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-741097-m02"
	I0116 03:43:31.331127       1 event.go:307] "Event occurred" object="multinode-741097-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-741097-m02 event: Registered Node multinode-741097-m02 in Controller"
	I0116 03:44:00.728931       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-741097-m02"
	I0116 03:44:03.454067       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I0116 03:44:03.475724       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-zwvv5"
	I0116 03:44:03.483113       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-5xhls"
	I0116 03:44:03.515877       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="62.418573ms"
	I0116 03:44:03.524292       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="8.282735ms"
	I0116 03:44:03.524649       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="33.838µs"
	I0116 03:44:03.529223       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="31.188µs"
	I0116 03:44:03.536269       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="41.757µs"
	I0116 03:44:05.605404       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="10.97073ms"
	I0116 03:44:05.605484       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="34.799µs"
	I0116 03:44:06.485375       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="4.248884ms"
	I0116 03:44:06.485541       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="35.848µs"
	
	
	==> kube-proxy [96413a8491ec7e9ebdf0ed9b214497505a043b62466c42315f9daeceb298bc8f] <==
	I0116 03:42:43.361504       1 server_others.go:69] "Using iptables proxy"
	I0116 03:42:43.375846       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I0116 03:42:43.400434       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0116 03:42:43.402329       1 server_others.go:152] "Using iptables Proxier"
	I0116 03:42:43.402358       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0116 03:42:43.402366       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0116 03:42:43.402411       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0116 03:42:43.402648       1 server.go:846] "Version info" version="v1.28.4"
	I0116 03:42:43.402664       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0116 03:42:43.403976       1 config.go:188] "Starting service config controller"
	I0116 03:42:43.403993       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0116 03:42:43.404011       1 config.go:97] "Starting endpoint slice config controller"
	I0116 03:42:43.404018       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0116 03:42:43.404358       1 config.go:315] "Starting node config controller"
	I0116 03:42:43.404374       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0116 03:42:43.504334       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0116 03:42:43.504337       1 shared_informer.go:318] Caches are synced for service config
	I0116 03:42:43.504503       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [7f5c04dc48c4408b421ff7134068e259d0df2ccd262d5ae8098522b9e09e8ea7] <==
	W0116 03:42:25.228281       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0116 03:42:25.228296       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0116 03:42:25.231644       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0116 03:42:25.231679       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0116 03:42:25.231745       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0116 03:42:25.231761       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0116 03:42:25.231802       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0116 03:42:25.231818       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0116 03:42:25.231857       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0116 03:42:25.231875       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0116 03:42:25.231930       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0116 03:42:25.231947       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0116 03:42:25.231981       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0116 03:42:25.231995       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0116 03:42:25.232028       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0116 03:42:25.232037       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0116 03:42:26.126366       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0116 03:42:26.126413       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0116 03:42:26.134531       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0116 03:42:26.134633       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0116 03:42:26.179244       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0116 03:42:26.179359       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0116 03:42:26.205587       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0116 03:42:26.205625       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0116 03:42:26.819038       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 16 03:42:41 multinode-741097 kubelet[1391]: I0116 03:42:41.470205    1391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gj79\" (UniqueName: \"kubernetes.io/projected/07b12aa4-20cf-4db6-8c2b-80085bc219a5-kube-api-access-5gj79\") pod \"kube-proxy-cm64c\" (UID: \"07b12aa4-20cf-4db6-8c2b-80085bc219a5\") " pod="kube-system/kube-proxy-cm64c"
	Jan 16 03:42:41 multinode-741097 kubelet[1391]: I0116 03:42:41.470232    1391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f8484e68-06a7-4a2b-868a-d81bd13a3656-lib-modules\") pod \"kindnet-g8srb\" (UID: \"f8484e68-06a7-4a2b-868a-d81bd13a3656\") " pod="kube-system/kindnet-g8srb"
	Jan 16 03:42:41 multinode-741097 kubelet[1391]: I0116 03:42:41.470254    1391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f8484e68-06a7-4a2b-868a-d81bd13a3656-xtables-lock\") pod \"kindnet-g8srb\" (UID: \"f8484e68-06a7-4a2b-868a-d81bd13a3656\") " pod="kube-system/kindnet-g8srb"
	Jan 16 03:42:41 multinode-741097 kubelet[1391]: I0116 03:42:41.470278    1391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/07b12aa4-20cf-4db6-8c2b-80085bc219a5-lib-modules\") pod \"kube-proxy-cm64c\" (UID: \"07b12aa4-20cf-4db6-8c2b-80085bc219a5\") " pod="kube-system/kube-proxy-cm64c"
	Jan 16 03:42:41 multinode-741097 kubelet[1391]: I0116 03:42:41.470299    1391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f8484e68-06a7-4a2b-868a-d81bd13a3656-cni-cfg\") pod \"kindnet-g8srb\" (UID: \"f8484e68-06a7-4a2b-868a-d81bd13a3656\") " pod="kube-system/kindnet-g8srb"
	Jan 16 03:42:41 multinode-741097 kubelet[1391]: I0116 03:42:41.470321    1391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bdgg\" (UniqueName: \"kubernetes.io/projected/f8484e68-06a7-4a2b-868a-d81bd13a3656-kube-api-access-6bdgg\") pod \"kindnet-g8srb\" (UID: \"f8484e68-06a7-4a2b-868a-d81bd13a3656\") " pod="kube-system/kindnet-g8srb"
	Jan 16 03:42:41 multinode-741097 kubelet[1391]: I0116 03:42:41.470346    1391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/07b12aa4-20cf-4db6-8c2b-80085bc219a5-kube-proxy\") pod \"kube-proxy-cm64c\" (UID: \"07b12aa4-20cf-4db6-8c2b-80085bc219a5\") " pod="kube-system/kube-proxy-cm64c"
	Jan 16 03:42:42 multinode-741097 kubelet[1391]: E0116 03:42:42.571983    1391 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Jan 16 03:42:42 multinode-741097 kubelet[1391]: E0116 03:42:42.572099    1391 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/07b12aa4-20cf-4db6-8c2b-80085bc219a5-kube-proxy podName:07b12aa4-20cf-4db6-8c2b-80085bc219a5 nodeName:}" failed. No retries permitted until 2024-01-16 03:42:43.072053758 +0000 UTC m=+15.053624634 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/07b12aa4-20cf-4db6-8c2b-80085bc219a5-kube-proxy") pod "kube-proxy-cm64c" (UID: "07b12aa4-20cf-4db6-8c2b-80085bc219a5") : failed to sync configmap cache: timed out waiting for the condition
	Jan 16 03:42:43 multinode-741097 kubelet[1391]: W0116 03:42:43.231654    1391 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/888bdee9b3912af7f1daa0ba33eed024d886f6d4bcbb0e38263793114ce465e5/crio-e4379e85d22d6a2db5fdd166410c844fbe3b341b0cf9dc1bb2ac99afd7458102 WatchSource:0}: Error finding container e4379e85d22d6a2db5fdd166410c844fbe3b341b0cf9dc1bb2ac99afd7458102: Status 404 returned error can't find the container with id e4379e85d22d6a2db5fdd166410c844fbe3b341b0cf9dc1bb2ac99afd7458102
	Jan 16 03:42:44 multinode-741097 kubelet[1391]: I0116 03:42:44.348545    1391 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-g8srb" podStartSLOduration=3.348503032 podCreationTimestamp="2024-01-16 03:42:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-16 03:42:42.389644891 +0000 UTC m=+14.371215783" watchObservedRunningTime="2024-01-16 03:42:44.348503032 +0000 UTC m=+16.330073916"
	Jan 16 03:43:12 multinode-741097 kubelet[1391]: I0116 03:43:12.814987    1391 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Jan 16 03:43:12 multinode-741097 kubelet[1391]: I0116 03:43:12.839504    1391 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-cm64c" podStartSLOduration=31.83946507 podCreationTimestamp="2024-01-16 03:42:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-16 03:42:44.348962087 +0000 UTC m=+16.330532979" watchObservedRunningTime="2024-01-16 03:43:12.83946507 +0000 UTC m=+44.821035962"
	Jan 16 03:43:12 multinode-741097 kubelet[1391]: I0116 03:43:12.839647    1391 topology_manager.go:215] "Topology Admit Handler" podUID="ed28cba5-d03c-4872-8d43-ac2b9cbde1c3" podNamespace="kube-system" podName="coredns-5dd5756b68-2z5xs"
	Jan 16 03:43:12 multinode-741097 kubelet[1391]: I0116 03:43:12.841502    1391 topology_manager.go:215] "Topology Admit Handler" podUID="a6a472e5-20d5-4ad7-8c69-cfdffeda3c59" podNamespace="kube-system" podName="storage-provisioner"
	Jan 16 03:43:12 multinode-741097 kubelet[1391]: I0116 03:43:12.888122    1391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed28cba5-d03c-4872-8d43-ac2b9cbde1c3-config-volume\") pod \"coredns-5dd5756b68-2z5xs\" (UID: \"ed28cba5-d03c-4872-8d43-ac2b9cbde1c3\") " pod="kube-system/coredns-5dd5756b68-2z5xs"
	Jan 16 03:43:12 multinode-741097 kubelet[1391]: I0116 03:43:12.888224    1391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a6a472e5-20d5-4ad7-8c69-cfdffeda3c59-tmp\") pod \"storage-provisioner\" (UID: \"a6a472e5-20d5-4ad7-8c69-cfdffeda3c59\") " pod="kube-system/storage-provisioner"
	Jan 16 03:43:12 multinode-741097 kubelet[1391]: I0116 03:43:12.888264    1391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k29rz\" (UniqueName: \"kubernetes.io/projected/a6a472e5-20d5-4ad7-8c69-cfdffeda3c59-kube-api-access-k29rz\") pod \"storage-provisioner\" (UID: \"a6a472e5-20d5-4ad7-8c69-cfdffeda3c59\") " pod="kube-system/storage-provisioner"
	Jan 16 03:43:12 multinode-741097 kubelet[1391]: I0116 03:43:12.888290    1391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rq6xc\" (UniqueName: \"kubernetes.io/projected/ed28cba5-d03c-4872-8d43-ac2b9cbde1c3-kube-api-access-rq6xc\") pod \"coredns-5dd5756b68-2z5xs\" (UID: \"ed28cba5-d03c-4872-8d43-ac2b9cbde1c3\") " pod="kube-system/coredns-5dd5756b68-2z5xs"
	Jan 16 03:43:13 multinode-741097 kubelet[1391]: W0116 03:43:13.191713    1391 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/888bdee9b3912af7f1daa0ba33eed024d886f6d4bcbb0e38263793114ce465e5/crio-4e2a331e64ec8b63a88861bd7e520469fac139cd018a76b335ab0ea07a2e737b WatchSource:0}: Error finding container 4e2a331e64ec8b63a88861bd7e520469fac139cd018a76b335ab0ea07a2e737b: Status 404 returned error can't find the container with id 4e2a331e64ec8b63a88861bd7e520469fac139cd018a76b335ab0ea07a2e737b
	Jan 16 03:43:13 multinode-741097 kubelet[1391]: I0116 03:43:13.411835    1391 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-2z5xs" podStartSLOduration=32.411787516 podCreationTimestamp="2024-01-16 03:42:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-16 03:43:13.397039832 +0000 UTC m=+45.378610724" watchObservedRunningTime="2024-01-16 03:43:13.411787516 +0000 UTC m=+45.393358400"
	Jan 16 03:43:14 multinode-741097 kubelet[1391]: I0116 03:43:14.398166    1391 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=32.398110236 podCreationTimestamp="2024-01-16 03:42:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-16 03:43:13.418872991 +0000 UTC m=+45.400443883" watchObservedRunningTime="2024-01-16 03:43:14.398110236 +0000 UTC m=+46.379681120"
	Jan 16 03:44:03 multinode-741097 kubelet[1391]: I0116 03:44:03.504488    1391 topology_manager.go:215] "Topology Admit Handler" podUID="fa119058-896b-4bdc-ba1c-ec1a1c512cf2" podNamespace="default" podName="busybox-5bc68d56bd-5xhls"
	Jan 16 03:44:03 multinode-741097 kubelet[1391]: I0116 03:44:03.593752    1391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x6dm\" (UniqueName: \"kubernetes.io/projected/fa119058-896b-4bdc-ba1c-ec1a1c512cf2-kube-api-access-6x6dm\") pod \"busybox-5bc68d56bd-5xhls\" (UID: \"fa119058-896b-4bdc-ba1c-ec1a1c512cf2\") " pod="default/busybox-5bc68d56bd-5xhls"
	Jan 16 03:44:03 multinode-741097 kubelet[1391]: W0116 03:44:03.849217    1391 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/888bdee9b3912af7f1daa0ba33eed024d886f6d4bcbb0e38263793114ce465e5/crio-6ccf73c79a26f09120774ef379892c736cbfc95a3eed4857dd949f246f64a097 WatchSource:0}: Error finding container 6ccf73c79a26f09120774ef379892c736cbfc95a3eed4857dd949f246f64a097: Status 404 returned error can't find the container with id 6ccf73c79a26f09120774ef379892c736cbfc95a3eed4857dd949f246f64a097
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p multinode-741097 -n multinode-741097
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-741097 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (4.11s)


Test pass (285/320)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 10.93
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.2
9 TestDownloadOnly/v1.16.0/DeleteAll 0.42
10 TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds 0.26
12 TestDownloadOnly/v1.28.4/json-events 9.59
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.09
18 TestDownloadOnly/v1.28.4/DeleteAll 0.22
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.15
21 TestDownloadOnly/v1.29.0-rc.2/json-events 9.94
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.09
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.23
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.16
30 TestBinaryMirror 0.62
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.09
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.1
36 TestAddons/Setup 164.12
38 TestAddons/parallel/Registry 16.51
40 TestAddons/parallel/InspektorGadget 11.83
41 TestAddons/parallel/MetricsServer 6.38
44 TestAddons/parallel/CSI 54.72
45 TestAddons/parallel/Headlamp 11.43
46 TestAddons/parallel/CloudSpanner 6.6
47 TestAddons/parallel/LocalPath 53.32
48 TestAddons/parallel/NvidiaDevicePlugin 5.54
49 TestAddons/parallel/Yakd 6
52 TestAddons/serial/GCPAuth/Namespaces 0.17
53 TestAddons/StoppedEnableDisable 12.31
54 TestCertOptions 36.23
55 TestCertExpiration 242.32
57 TestForceSystemdFlag 32.28
58 TestForceSystemdEnv 45.27
64 TestErrorSpam/setup 30.43
65 TestErrorSpam/start 0.85
66 TestErrorSpam/status 1.1
67 TestErrorSpam/pause 1.81
68 TestErrorSpam/unpause 1.98
69 TestErrorSpam/stop 1.46
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 75.66
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 33.6
76 TestFunctional/serial/KubeContext 0.07
77 TestFunctional/serial/KubectlGetPods 0.09
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.78
81 TestFunctional/serial/CacheCmd/cache/add_local 1.08
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
83 TestFunctional/serial/CacheCmd/cache/list 0.08
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.33
85 TestFunctional/serial/CacheCmd/cache/cache_reload 2.09
86 TestFunctional/serial/CacheCmd/cache/delete 0.14
87 TestFunctional/serial/MinikubeKubectlCmd 0.15
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
89 TestFunctional/serial/ExtraConfig 33.63
90 TestFunctional/serial/ComponentHealth 0.1
91 TestFunctional/serial/LogsCmd 1.78
92 TestFunctional/serial/LogsFileCmd 1.8
93 TestFunctional/serial/InvalidService 4.55
95 TestFunctional/parallel/ConfigCmd 0.62
96 TestFunctional/parallel/DashboardCmd 14.78
97 TestFunctional/parallel/DryRun 0.66
98 TestFunctional/parallel/InternationalLanguage 0.25
99 TestFunctional/parallel/StatusCmd 1.13
103 TestFunctional/parallel/ServiceCmdConnect 11.76
104 TestFunctional/parallel/AddonsCmd 0.21
105 TestFunctional/parallel/PersistentVolumeClaim 26.74
107 TestFunctional/parallel/SSHCmd 0.82
108 TestFunctional/parallel/CpCmd 2.13
110 TestFunctional/parallel/FileSync 0.35
111 TestFunctional/parallel/CertSync 2.37
115 TestFunctional/parallel/NodeLabels 0.12
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.86
119 TestFunctional/parallel/License 0.34
121 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.63
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
124 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.32
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.16
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
130 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
131 TestFunctional/parallel/ServiceCmd/DeployApp 7.24
132 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
133 TestFunctional/parallel/ProfileCmd/profile_list 0.44
134 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
135 TestFunctional/parallel/MountCmd/any-port 8.57
136 TestFunctional/parallel/ServiceCmd/List 0.65
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.61
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.52
139 TestFunctional/parallel/ServiceCmd/Format 0.54
140 TestFunctional/parallel/ServiceCmd/URL 0.43
141 TestFunctional/parallel/MountCmd/specific-port 2.27
142 TestFunctional/parallel/MountCmd/VerifyCleanup 2.3
143 TestFunctional/parallel/Version/short 0.08
144 TestFunctional/parallel/Version/components 1.14
145 TestFunctional/parallel/ImageCommands/ImageListShort 0.31
146 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
147 TestFunctional/parallel/ImageCommands/ImageListJson 0.32
148 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
149 TestFunctional/parallel/ImageCommands/ImageBuild 2.87
150 TestFunctional/parallel/ImageCommands/Setup 1.7
151 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.01
152 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.53
153 TestFunctional/parallel/UpdateContextCmd/no_changes 0.18
154 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
155 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
156 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.26
157 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.91
158 TestFunctional/parallel/ImageCommands/ImageRemove 0.56
159 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.3
160 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.95
161 TestFunctional/delete_addon-resizer_images 0.08
162 TestFunctional/delete_my-image_image 0.02
163 TestFunctional/delete_minikube_cached_images 0.02
167 TestIngressAddonLegacy/StartLegacyK8sCluster 83.68
169 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 11.46
170 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.67
174 TestJSONOutput/start/Command 51.06
175 TestJSONOutput/start/Audit 0
177 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/pause/Command 0.79
181 TestJSONOutput/pause/Audit 0
183 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/unpause/Command 0.73
187 TestJSONOutput/unpause/Audit 0
189 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/stop/Command 5.93
193 TestJSONOutput/stop/Audit 0
195 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
197 TestErrorJSONOutput 0.26
199 TestKicCustomNetwork/create_custom_network 44.62
200 TestKicCustomNetwork/use_default_bridge_network 32.04
201 TestKicExistingNetwork 34.1
202 TestKicCustomSubnet 34.4
203 TestKicStaticIP 34.56
204 TestMainNoArgs 0.07
205 TestMinikubeProfile 67.91
208 TestMountStart/serial/StartWithMountFirst 9.14
209 TestMountStart/serial/VerifyMountFirst 0.29
210 TestMountStart/serial/StartWithMountSecond 6.64
211 TestMountStart/serial/VerifyMountSecond 0.29
212 TestMountStart/serial/DeleteFirst 1.64
213 TestMountStart/serial/VerifyMountPostDelete 0.29
214 TestMountStart/serial/Stop 1.22
215 TestMountStart/serial/RestartStopped 7.87
216 TestMountStart/serial/VerifyMountPostStop 0.28
219 TestMultiNode/serial/FreshStart2Nodes 122.39
220 TestMultiNode/serial/DeployApp2Nodes 5.11
222 TestMultiNode/serial/AddNode 48.04
223 TestMultiNode/serial/MultiNodeLabels 0.1
224 TestMultiNode/serial/ProfileList 0.35
225 TestMultiNode/serial/CopyFile 10.94
226 TestMultiNode/serial/StopNode 2.35
227 TestMultiNode/serial/StartAfterStop 11.93
228 TestMultiNode/serial/RestartKeepsNodes 122.23
229 TestMultiNode/serial/DeleteNode 5.03
230 TestMultiNode/serial/StopMultiNode 23.95
231 TestMultiNode/serial/RestartMultiNode 83.44
232 TestMultiNode/serial/ValidateNameConflict 37
237 TestPreload 164.46
239 TestScheduledStopUnix 106.94
242 TestInsufficientStorage 13.53
243 TestRunningBinaryUpgrade 107.8
245 TestKubernetesUpgrade 408.3
246 TestMissingContainerUpgrade 149.97
248 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
249 TestNoKubernetes/serial/StartWithK8s 43.22
250 TestNoKubernetes/serial/StartWithStopK8s 28.34
251 TestNoKubernetes/serial/Start 6.72
252 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
253 TestNoKubernetes/serial/ProfileList 4.38
254 TestNoKubernetes/serial/Stop 1.23
255 TestNoKubernetes/serial/StartNoArgs 6.87
256 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.31
257 TestStoppedBinaryUpgrade/Setup 1.39
258 TestStoppedBinaryUpgrade/Upgrade 70.8
259 TestStoppedBinaryUpgrade/MinikubeLogs 1.08
268 TestPause/serial/Start 77.27
269 TestPause/serial/SecondStartNoReconfiguration 35.05
270 TestPause/serial/Pause 1.27
271 TestPause/serial/VerifyStatus 0.53
272 TestPause/serial/Unpause 1.12
273 TestPause/serial/PauseAgain 1.44
274 TestPause/serial/DeletePaused 3.41
275 TestPause/serial/VerifyDeletedResources 0.49
283 TestNetworkPlugins/group/false 6.37
288 TestStartStop/group/old-k8s-version/serial/FirstStart 119.86
289 TestStartStop/group/old-k8s-version/serial/DeployApp 9.49
290 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.1
291 TestStartStop/group/old-k8s-version/serial/Stop 12.1
292 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
293 TestStartStop/group/old-k8s-version/serial/SecondStart 458.47
295 TestStartStop/group/no-preload/serial/FirstStart 64.57
296 TestStartStop/group/no-preload/serial/DeployApp 9.35
297 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.13
298 TestStartStop/group/no-preload/serial/Stop 11.99
299 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
300 TestStartStop/group/no-preload/serial/SecondStart 346.54
301 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 13
302 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
303 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.11
304 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.13
305 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.29
306 TestStartStop/group/old-k8s-version/serial/Pause 3.97
307 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.29
308 TestStartStop/group/no-preload/serial/Pause 4.92
310 TestStartStop/group/embed-certs/serial/FirstStart 84.91
312 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 84.82
313 TestStartStop/group/embed-certs/serial/DeployApp 9.36
314 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.33
315 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.2
316 TestStartStop/group/embed-certs/serial/Stop 12
317 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.17
318 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.99
319 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
320 TestStartStop/group/embed-certs/serial/SecondStart 626.89
321 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.29
322 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 355.91
323 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 10.01
324 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
325 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.3
326 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.43
328 TestStartStop/group/newest-cni/serial/FirstStart 43.79
329 TestStartStop/group/newest-cni/serial/DeployApp 0
330 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.03
331 TestStartStop/group/newest-cni/serial/Stop 1.25
332 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
333 TestStartStop/group/newest-cni/serial/SecondStart 31.29
334 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
335 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
336 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
337 TestStartStop/group/newest-cni/serial/Pause 3.18
338 TestNetworkPlugins/group/auto/Start 77.55
339 TestNetworkPlugins/group/auto/KubeletFlags 0.33
340 TestNetworkPlugins/group/auto/NetCatPod 10.26
341 TestNetworkPlugins/group/auto/DNS 0.21
342 TestNetworkPlugins/group/auto/Localhost 0.16
343 TestNetworkPlugins/group/auto/HairPin 0.18
344 TestNetworkPlugins/group/kindnet/Start 79.96
345 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
346 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
347 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.27
348 TestStartStop/group/embed-certs/serial/Pause 3.37
349 TestNetworkPlugins/group/calico/Start 79.59
350 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
351 TestNetworkPlugins/group/kindnet/KubeletFlags 0.43
352 TestNetworkPlugins/group/kindnet/NetCatPod 12.34
353 TestNetworkPlugins/group/kindnet/DNS 0.22
354 TestNetworkPlugins/group/kindnet/Localhost 0.21
355 TestNetworkPlugins/group/kindnet/HairPin 0.16
356 TestNetworkPlugins/group/custom-flannel/Start 74.71
357 TestNetworkPlugins/group/calico/ControllerPod 6.01
358 TestNetworkPlugins/group/calico/KubeletFlags 0.5
359 TestNetworkPlugins/group/calico/NetCatPod 13.34
360 TestNetworkPlugins/group/calico/DNS 0.21
361 TestNetworkPlugins/group/calico/Localhost 0.19
362 TestNetworkPlugins/group/calico/HairPin 0.2
363 TestNetworkPlugins/group/enable-default-cni/Start 91.16
364 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.35
365 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.49
366 TestNetworkPlugins/group/custom-flannel/DNS 0.25
367 TestNetworkPlugins/group/custom-flannel/Localhost 0.2
368 TestNetworkPlugins/group/custom-flannel/HairPin 0.22
369 TestNetworkPlugins/group/flannel/Start 68.66
370 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.33
371 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.25
372 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
373 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
374 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
375 TestNetworkPlugins/group/flannel/ControllerPod 6.01
376 TestNetworkPlugins/group/flannel/KubeletFlags 0.42
377 TestNetworkPlugins/group/flannel/NetCatPod 12.32
378 TestNetworkPlugins/group/bridge/Start 87.1
379 TestNetworkPlugins/group/flannel/DNS 0.27
380 TestNetworkPlugins/group/flannel/Localhost 0.29
381 TestNetworkPlugins/group/flannel/HairPin 0.31
382 TestNetworkPlugins/group/bridge/KubeletFlags 0.34
383 TestNetworkPlugins/group/bridge/NetCatPod 10.25
384 TestNetworkPlugins/group/bridge/DNS 0.18
385 TestNetworkPlugins/group/bridge/Localhost 0.15
386 TestNetworkPlugins/group/bridge/HairPin 0.16
TestDownloadOnly/v1.16.0/json-events (10.93s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-611919 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-611919 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (10.929983505s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (10.93s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.2s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-611919
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-611919: exit status 85 (201.930262ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-611919 | jenkins | v1.32.0 | 16 Jan 24 03:20 UTC |          |
	|         | -p download-only-611919        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 03:20:00
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 03:20:00.602479  724626 out.go:296] Setting OutFile to fd 1 ...
	I0116 03:20:00.602686  724626 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:20:00.602710  724626 out.go:309] Setting ErrFile to fd 2...
	I0116 03:20:00.602727  724626 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:20:00.603022  724626 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-719286/.minikube/bin
	W0116 03:20:00.603216  724626 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17967-719286/.minikube/config/config.json: open /home/jenkins/minikube-integration/17967-719286/.minikube/config/config.json: no such file or directory
	I0116 03:20:00.603667  724626 out.go:303] Setting JSON to true
	I0116 03:20:00.604577  724626 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10950,"bootTime":1705364251,"procs":164,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0116 03:20:00.604729  724626 start.go:138] virtualization:  
	I0116 03:20:00.608123  724626 out.go:97] [download-only-611919] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0116 03:20:00.610295  724626 out.go:169] MINIKUBE_LOCATION=17967
	W0116 03:20:00.608384  724626 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17967-719286/.minikube/cache/preloaded-tarball: no such file or directory
	I0116 03:20:00.608459  724626 notify.go:220] Checking for updates...
	I0116 03:20:00.612483  724626 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 03:20:00.614551  724626 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17967-719286/kubeconfig
	I0116 03:20:00.616596  724626 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-719286/.minikube
	I0116 03:20:00.618702  724626 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0116 03:20:00.622588  724626 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0116 03:20:00.622824  724626 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 03:20:00.645855  724626 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0116 03:20:00.645987  724626 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 03:20:00.729811  724626 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2024-01-16 03:20:00.719262291 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0116 03:20:00.729922  724626 docker.go:295] overlay module found
	I0116 03:20:00.731833  724626 out.go:97] Using the docker driver based on user configuration
	I0116 03:20:00.731867  724626 start.go:298] selected driver: docker
	I0116 03:20:00.731874  724626 start.go:902] validating driver "docker" against <nil>
	I0116 03:20:00.731974  724626 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 03:20:00.801242  724626 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2024-01-16 03:20:00.790977789 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0116 03:20:00.801392  724626 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0116 03:20:00.801719  724626 start_flags.go:392] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0116 03:20:00.801895  724626 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0116 03:20:00.803981  724626 out.go:169] Using Docker driver with root privileges
	I0116 03:20:00.805933  724626 cni.go:84] Creating CNI manager for ""
	I0116 03:20:00.805950  724626 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0116 03:20:00.805961  724626 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0116 03:20:00.805974  724626 start_flags.go:321] config:
	{Name:download-only-611919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-611919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:20:00.807973  724626 out.go:97] Starting control plane node download-only-611919 in cluster download-only-611919
	I0116 03:20:00.807991  724626 cache.go:121] Beginning downloading kic base image for docker with crio
	I0116 03:20:00.810044  724626 out.go:97] Pulling base image v0.0.42-1704759386-17866 ...
	I0116 03:20:00.810067  724626 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0116 03:20:00.810163  724626 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0116 03:20:00.826716  724626 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0116 03:20:00.826914  724626 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0116 03:20:00.827015  724626 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0116 03:20:00.874636  724626 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I0116 03:20:00.874661  724626 cache.go:56] Caching tarball of preloaded images
	I0116 03:20:00.874826  724626 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0116 03:20:00.877215  724626 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0116 03:20:00.877234  724626 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I0116 03:20:00.996391  724626 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:743cd3b7071469270e4dbdc0d89badaa -> /home/jenkins/minikube-integration/17967-719286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I0116 03:20:05.377553  724626 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	I0116 03:20:09.571064  724626 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I0116 03:20:09.571158  724626 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17967-719286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I0116 03:20:10.571720  724626 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0116 03:20:10.572100  724626 profile.go:148] Saving config to /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/download-only-611919/config.json ...
	I0116 03:20:10.572132  724626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/download-only-611919/config.json: {Name:mk01aa410b3f3ff56907588c8e4bb69d3cc467b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:20:10.572322  724626 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0116 03:20:10.573109  724626 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/arm64/kubectl.sha1 -> /home/jenkins/minikube-integration/17967-719286/.minikube/cache/linux/arm64/v1.16.0/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-611919"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.20s)
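Editor's note: the log above shows the preload tarball being fetched with an md5 digest in the URL query string and then verified on disk (the preload.go:238 and preload.go:256 steps). As a rough illustration of that pattern, here is a minimal, self-contained Go sketch; the helper name and the hard-coded filename/digest (copied from the download URL above) are for illustration only, not minikube's actual code:

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 streams a file through an md5 hash and compares the hex digest
// against the expected value (hypothetical helper, for illustration only).
func verifyMD5(path, wantHex string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantHex {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
	}
	return nil
}

func main() {
	// Filename and digest copied from the download URL in the log above.
	err := verifyMD5("preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4",
		"743cd3b7071469270e4dbdc0d89badaa")
	fmt.Println(err)
}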

TestDownloadOnly/v1.16.0/DeleteAll (0.42s)

=== RUN   TestDownloadOnly/v1.16.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.16.0/DeleteAll (0.42s)

TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.26s)

=== RUN   TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-611919
--- PASS: TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.26s)

TestDownloadOnly/v1.28.4/json-events (9.59s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-389954 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-389954 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio: (9.586966955s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (9.59s)

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)
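Editor's note: as with v1.16.0, this subtest only has to confirm that the preceding json-events run left the preload tarball in the local cache. A minimal Go sketch of such a check, assuming the cache layout visible in the download paths logged elsewhere in this report (the real assertion lives in aaa_download_only_test.go):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// The .minikube root and cache layout are assumptions based on the
	// paths that appear in the download logs above.
	root := os.Getenv("MINIKUBE_HOME")
	tarball := "preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4"
	path := filepath.Join(root, "cache", "preloaded-tarball", tarball)
	if _, err := os.Stat(path); err != nil {
		fmt.Println("preload missing:", err)
		return
	}
	fmt.Println("preload exists:", path)
}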

TestDownloadOnly/v1.28.4/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-389954
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-389954: exit status 85 (88.47221ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-611919 | jenkins | v1.32.0 | 16 Jan 24 03:20 UTC |                     |
	|         | -p download-only-611919        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 16 Jan 24 03:20 UTC | 16 Jan 24 03:20 UTC |
	| delete  | -p download-only-611919        | download-only-611919 | jenkins | v1.32.0 | 16 Jan 24 03:20 UTC | 16 Jan 24 03:20 UTC |
	| start   | -o=json --download-only        | download-only-389954 | jenkins | v1.32.0 | 16 Jan 24 03:20 UTC |                     |
	|         | -p download-only-389954        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 03:20:12
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 03:20:12.422434  724789 out.go:296] Setting OutFile to fd 1 ...
	I0116 03:20:12.422634  724789 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:20:12.422658  724789 out.go:309] Setting ErrFile to fd 2...
	I0116 03:20:12.422676  724789 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:20:12.422954  724789 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-719286/.minikube/bin
	I0116 03:20:12.423424  724789 out.go:303] Setting JSON to true
	I0116 03:20:12.424385  724789 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10962,"bootTime":1705364251,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0116 03:20:12.424490  724789 start.go:138] virtualization:  
	I0116 03:20:12.453743  724789 out.go:97] [download-only-389954] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0116 03:20:12.454250  724789 notify.go:220] Checking for updates...
	I0116 03:20:12.483903  724789 out.go:169] MINIKUBE_LOCATION=17967
	I0116 03:20:12.516080  724789 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 03:20:12.547135  724789 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17967-719286/kubeconfig
	I0116 03:20:12.580522  724789 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-719286/.minikube
	I0116 03:20:12.613833  724789 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0116 03:20:12.677350  724789 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0116 03:20:12.677654  724789 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 03:20:12.704710  724789 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0116 03:20:12.704811  724789 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 03:20:12.769102  724789 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2024-01-16 03:20:12.759878004 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0116 03:20:12.769199  724789 docker.go:295] overlay module found
	I0116 03:20:12.780678  724789 out.go:97] Using the docker driver based on user configuration
	I0116 03:20:12.780708  724789 start.go:298] selected driver: docker
	I0116 03:20:12.780716  724789 start.go:902] validating driver "docker" against <nil>
	I0116 03:20:12.780814  724789 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 03:20:12.844708  724789 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2024-01-16 03:20:12.836008265 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0116 03:20:12.844863  724789 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0116 03:20:12.845141  724789 start_flags.go:392] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0116 03:20:12.845311  724789 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0116 03:20:12.847840  724789 out.go:169] Using Docker driver with root privileges
	I0116 03:20:12.849765  724789 cni.go:84] Creating CNI manager for ""
	I0116 03:20:12.849785  724789 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0116 03:20:12.849798  724789 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0116 03:20:12.849814  724789 start_flags.go:321] config:
	{Name:download-only-389954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-389954 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:20:12.852033  724789 out.go:97] Starting control plane node download-only-389954 in cluster download-only-389954
	I0116 03:20:12.852140  724789 cache.go:121] Beginning downloading kic base image for docker with crio
	I0116 03:20:12.854424  724789 out.go:97] Pulling base image v0.0.42-1704759386-17866 ...
	I0116 03:20:12.854446  724789 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 03:20:12.854598  724789 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0116 03:20:12.870778  724789 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0116 03:20:12.870917  724789 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0116 03:20:12.870941  724789 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory, skipping pull
	I0116 03:20:12.870948  724789 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in cache, skipping pull
	I0116 03:20:12.870956  724789 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	I0116 03:20:12.917108  724789 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I0116 03:20:12.917127  724789 cache.go:56] Caching tarball of preloaded images
	I0116 03:20:12.918061  724789 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 03:20:12.920732  724789 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0116 03:20:12.920750  724789 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 ...
	I0116 03:20:13.034076  724789 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4?checksum=md5:23e2271fd1a7b32f52ce36ae8363c081 -> /home/jenkins/minikube-integration/17967-719286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-389954"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.09s)
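Editor's note: as with the v1.16.0 run, "minikube logs" is expected to fail here. A download-only profile never starts a control plane (the stdout above says as much: the control plane node "" does not exist), so the command exits with status 85 and the test treats that as a pass. A minimal Go sketch of asserting a specific exit code from a CLI run (not the harness's actual helper):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Binary path and profile name copied from the log above; adjust as needed.
	cmd := exec.Command("out/minikube-linux-arm64", "logs", "-p", "download-only-389954")
	err := cmd.Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 85 {
		fmt.Println("got the expected exit status 85")
		return
	}
	fmt.Println("unexpected result:", err)
}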

TestDownloadOnly/v1.28.4/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.22s)

TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-389954
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.29.0-rc.2/json-events (9.94s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-860274 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-860274 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (9.936473714s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (9.94s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                

                                                
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-860274
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-860274: exit status 85 (88.578598ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-611919 | jenkins | v1.32.0 | 16 Jan 24 03:20 UTC |                     |
	|         | -p download-only-611919           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 16 Jan 24 03:20 UTC | 16 Jan 24 03:20 UTC |
	| delete  | -p download-only-611919           | download-only-611919 | jenkins | v1.32.0 | 16 Jan 24 03:20 UTC | 16 Jan 24 03:20 UTC |
	| start   | -o=json --download-only           | download-only-389954 | jenkins | v1.32.0 | 16 Jan 24 03:20 UTC |                     |
	|         | -p download-only-389954           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 16 Jan 24 03:20 UTC | 16 Jan 24 03:20 UTC |
	| delete  | -p download-only-389954           | download-only-389954 | jenkins | v1.32.0 | 16 Jan 24 03:20 UTC | 16 Jan 24 03:20 UTC |
	| start   | -o=json --download-only           | download-only-860274 | jenkins | v1.32.0 | 16 Jan 24 03:20 UTC |                     |
	|         | -p download-only-860274           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 03:20:22
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 03:20:22.467677  724954 out.go:296] Setting OutFile to fd 1 ...
	I0116 03:20:22.467817  724954 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:20:22.467828  724954 out.go:309] Setting ErrFile to fd 2...
	I0116 03:20:22.467834  724954 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:20:22.468101  724954 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-719286/.minikube/bin
	I0116 03:20:22.468520  724954 out.go:303] Setting JSON to true
	I0116 03:20:22.469343  724954 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10972,"bootTime":1705364251,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0116 03:20:22.469416  724954 start.go:138] virtualization:  
	I0116 03:20:22.471942  724954 out.go:97] [download-only-860274] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0116 03:20:22.474036  724954 out.go:169] MINIKUBE_LOCATION=17967
	I0116 03:20:22.472271  724954 notify.go:220] Checking for updates...
	I0116 03:20:22.476186  724954 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 03:20:22.478136  724954 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17967-719286/kubeconfig
	I0116 03:20:22.480106  724954 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-719286/.minikube
	I0116 03:20:22.482084  724954 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0116 03:20:22.486267  724954 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0116 03:20:22.486547  724954 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 03:20:22.508985  724954 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0116 03:20:22.509100  724954 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 03:20:22.600556  724954 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2024-01-16 03:20:22.591691403 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0116 03:20:22.600656  724954 docker.go:295] overlay module found
	I0116 03:20:22.602972  724954 out.go:97] Using the docker driver based on user configuration
	I0116 03:20:22.602996  724954 start.go:298] selected driver: docker
	I0116 03:20:22.603002  724954 start.go:902] validating driver "docker" against <nil>
	I0116 03:20:22.603102  724954 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 03:20:22.664869  724954 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2024-01-16 03:20:22.655786458 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0116 03:20:22.665022  724954 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0116 03:20:22.665286  724954 start_flags.go:392] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0116 03:20:22.665463  724954 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0116 03:20:22.668346  724954 out.go:169] Using Docker driver with root privileges
	I0116 03:20:22.670331  724954 cni.go:84] Creating CNI manager for ""
	I0116 03:20:22.670367  724954 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0116 03:20:22.670380  724954 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0116 03:20:22.670396  724954 start_flags.go:321] config:
	{Name:download-only-860274 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-860274 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:20:22.672566  724954 out.go:97] Starting control plane node download-only-860274 in cluster download-only-860274
	I0116 03:20:22.672585  724954 cache.go:121] Beginning downloading kic base image for docker with crio
	I0116 03:20:22.674626  724954 out.go:97] Pulling base image v0.0.42-1704759386-17866 ...
	I0116 03:20:22.674647  724954 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0116 03:20:22.674800  724954 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0116 03:20:22.691074  724954 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0116 03:20:22.691226  724954 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0116 03:20:22.691249  724954 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory, skipping pull
	I0116 03:20:22.691262  724954 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in cache, skipping pull
	I0116 03:20:22.691270  724954 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	I0116 03:20:22.739849  724954 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4
	I0116 03:20:22.739876  724954 cache.go:56] Caching tarball of preloaded images
	I0116 03:20:22.740551  724954 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0116 03:20:22.742762  724954 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0116 03:20:22.742781  724954 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4 ...
	I0116 03:20:22.840916  724954 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4?checksum=md5:9d8119c6fd5c58f71de57a6fdbe27bf3 -> /home/jenkins/minikube-integration/17967-719286/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-860274"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.09s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.23s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.16s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-860274
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.16s)

TestBinaryMirror (0.62s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-260234 --alsologtostderr --binary-mirror http://127.0.0.1:36735 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-260234" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-260234
--- PASS: TestBinaryMirror (0.62s)
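Editor's note: TestBinaryMirror points --binary-mirror at a local HTTP endpoint (http://127.0.0.1:36735 above) so that kubectl/kubelet/kubeadm downloads are served locally instead of from dl.k8s.io. A minimal Go sketch of such a mirror, assuming the binaries have already been laid out on disk in a release-style directory (the directory name below is an assumption):

package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve a directory that mirrors the release layout, e.g.
	// ./mirror/release/v1.28.4/bin/linux/arm64/kubectl (assumed layout).
	http.Handle("/", http.FileServer(http.Dir("./mirror")))
	log.Fatal(http.ListenAndServe("127.0.0.1:36735", nil))
}

With this server running, a start command like the one above ("minikube start --download-only --binary-mirror http://127.0.0.1:36735 ...") would resolve its binary downloads against it.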

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-005301
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-005301: exit status 85 (93.908068ms)

-- stdout --
	* Profile "addons-005301" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-005301"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.1s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-005301
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-005301: exit status 85 (96.347382ms)

-- stdout --
	* Profile "addons-005301" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-005301"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.10s)

TestAddons/Setup (164.12s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-005301 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-005301 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (2m44.116458476s)
--- PASS: TestAddons/Setup (164.12s)

TestAddons/parallel/Registry (16.51s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 45.951848ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-jwfz4" [bc959312-0380-4457-89d3-7a3db8b1e928] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005019194s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-gwljk" [2edcc070-e7c4-41ce-92db-9bbb2abd69e5] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004849824s
addons_test.go:340: (dbg) Run:  kubectl --context addons-005301 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-005301 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-005301 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.354779751s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-arm64 -p addons-005301 ip
2024/01/16 03:23:34 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p addons-005301 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.51s)
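
The closing steps of the registry test resolve the node IP with "minikube ip" and fetch http://<ip>:5000 (the DEBUG GET line above). A hedged sketch of that reachability probe; the binary path, profile name, and port come from the log, everything else is illustrative:

package main

import (
	"fmt"
	"net/http"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Resolve the node IP the same way the test does.
	ipOut, err := exec.Command("out/minikube-linux-arm64", "-p", "addons-005301", "ip").Output()
	if err != nil {
		fmt.Println("minikube ip failed:", err)
		return
	}
	addr := "http://" + strings.TrimSpace(string(ipOut)) + ":5000"

	// Probe the registry endpoint reported by the DEBUG line.
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get(addr)
	if err != nil {
		fmt.Println("registry not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("registry responded:", resp.Status)
}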

TestAddons/parallel/InspektorGadget (11.83s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-bcksv" [4e12d4a1-1b83-4d3a-8ddb-2f8708ee4b58] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003464955s
addons_test.go:841: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-005301
addons_test.go:841: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-005301: (5.820403976s)
--- PASS: TestAddons/parallel/InspektorGadget (11.83s)

TestAddons/parallel/MetricsServer (6.38s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 6.87107ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-vmqb5" [9250ab40-a698-47db-824c-ce37ce1c5daf] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005950107s
addons_test.go:415: (dbg) Run:  kubectl --context addons-005301 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-arm64 -p addons-005301 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:432: (dbg) Done: out/minikube-linux-arm64 -p addons-005301 addons disable metrics-server --alsologtostderr -v=1: (1.179050958s)
--- PASS: TestAddons/parallel/MetricsServer (6.38s)

TestAddons/parallel/CSI (54.72s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 46.143612ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-005301 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-005301 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-005301 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-005301 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-005301 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-005301 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-005301 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-005301 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-005301 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-005301 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-005301 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-005301 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-005301 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-005301 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-005301 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-005301 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-005301 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-005301 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-005301 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-005301 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-005301 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [da4f5dfb-2091-480e-aa76-7b681d45ffa6] Pending
helpers_test.go:344: "task-pv-pod" [da4f5dfb-2091-480e-aa76-7b681d45ffa6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [da4f5dfb-2091-480e-aa76-7b681d45ffa6] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.003952944s
addons_test.go:584: (dbg) Run:  kubectl --context addons-005301 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-005301 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-005301 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-005301 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-005301 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-005301 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-005301 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-005301 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-005301 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [f045d8ea-9e8e-45f1-be2c-bfb075ddba83] Pending
helpers_test.go:344: "task-pv-pod-restore" [f045d8ea-9e8e-45f1-be2c-bfb075ddba83] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [f045d8ea-9e8e-45f1-be2c-bfb075ddba83] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 10.003480935s
addons_test.go:626: (dbg) Run:  kubectl --context addons-005301 delete pod task-pv-pod-restore
addons_test.go:626: (dbg) Done: kubectl --context addons-005301 delete pod task-pv-pod-restore: (1.213313743s)
addons_test.go:630: (dbg) Run:  kubectl --context addons-005301 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-005301 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-arm64 -p addons-005301 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-arm64 -p addons-005301 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.807089587s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-arm64 -p addons-005301 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (54.72s)
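
The long run of identical "get pvc ... jsonpath={.status.phase}" lines above is a poll loop: the harness re-executes the same kubectl query until the claim reports Bound or the 6m0s budget runs out. A sketch of that loop, assuming kubectl on PATH and the context/claim names from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitPVCBound re-runs the jsonpath query until the claim's phase flips to Bound.
func waitPVCBound(ctx, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", ctx,
			"get", "pvc", name, "-n", ns,
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second) // poll interval is an assumption
	}
	return fmt.Errorf("pvc %s/%s not Bound within %s", ns, name, timeout)
}

func main() {
	fmt.Println(waitPVCBound("addons-005301", "default", "hpvc", 6*time.Minute))
}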

TestAddons/parallel/Headlamp (11.43s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-005301 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-005301 --alsologtostderr -v=1: (1.426638326s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-kqmk5" [1d56d8b5-53bb-40f4-a8be-8b4b999f08af] Pending
helpers_test.go:344: "headlamp-7ddfbb94ff-kqmk5" [1d56d8b5-53bb-40f4-a8be-8b4b999f08af] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-kqmk5" [1d56d8b5-53bb-40f4-a8be-8b4b999f08af] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003343677s
--- PASS: TestAddons/parallel/Headlamp (11.43s)
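
The Pending -> ContainersNotReady -> Running progression above is the generic pod-wait used throughout these parallel addon tests. A simplified sketch that only watches the pod phase (the real helper also tracks container readiness, as the intermediate states show); selector, namespace, and context come from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitPodsRunning polls until every pod matching the selector reports phase Running.
func waitPodsRunning(ctx, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", ctx, "get", "pods",
			"-n", ns, "-l", selector,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		phases := strings.Fields(string(out))
		running := err == nil && len(phases) > 0
		for _, p := range phases {
			if p != "Running" {
				running = false
			}
		}
		if running {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pods %q in %q not Running within %s", selector, ns, timeout)
}

func main() {
	fmt.Println(waitPodsRunning("addons-005301", "headlamp",
		"app.kubernetes.io/name=headlamp", 8*time.Minute))
}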

TestAddons/parallel/CloudSpanner (6.6s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-64c8c85f65-q6qt6" [82d52433-c680-4639-97a6-59ee16d63a41] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003292015s
addons_test.go:860: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-005301
--- PASS: TestAddons/parallel/CloudSpanner (6.60s)

TestAddons/parallel/LocalPath (53.32s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-005301 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-005301 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-005301 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-005301 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-005301 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-005301 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-005301 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-005301 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [e8170db8-b870-45be-b2d9-bd5d7cda4996] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [e8170db8-b870-45be-b2d9-bd5d7cda4996] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [e8170db8-b870-45be-b2d9-bd5d7cda4996] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.005693396s
addons_test.go:891: (dbg) Run:  kubectl --context addons-005301 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-arm64 -p addons-005301 ssh "cat /opt/local-path-provisioner/pvc-f4595d61-448d-4d40-8aad-005cd4aa97ec_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-005301 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-005301 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-arm64 -p addons-005301 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-arm64 -p addons-005301 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.161015495s)
--- PASS: TestAddons/parallel/LocalPath (53.32s)

TestAddons/parallel/NvidiaDevicePlugin (5.54s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-8hr29" [5306d06e-94e9-4a23-a82d-32ab86b63e82] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004025017s
addons_test.go:955: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-005301
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.54s)

TestAddons/parallel/Yakd (6s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-5t5qt" [27f1ae16-c448-4485-a337-5f935c95b8fb] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003361714s
--- PASS: TestAddons/parallel/Yakd (6.00s)

TestAddons/serial/GCPAuth/Namespaces (0.17s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-005301 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-005301 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

TestAddons/StoppedEnableDisable (12.31s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-005301
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-005301: (11.984147658s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-005301
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-005301
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-005301
--- PASS: TestAddons/StoppedEnableDisable (12.31s)

TestCertOptions (36.23s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-291752 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-291752 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (33.43746708s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-291752 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-291752 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-291752 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-291752" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-291752
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-291752: (2.079013536s)
--- PASS: TestCertOptions (36.23s)
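
The openssl step above verifies that the extra --apiserver-ips and --apiserver-names ended up as SANs in the API server certificate. The same check can be done with crypto/x509 once the cert has been copied off the node; the on-node path and expected values are from the log, the local filename is hypothetical:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// apiserver.crt fetched beforehand from /var/lib/minikube/certs/ on the node.
	data, err := os.ReadFile("apiserver.crt")
	if err != nil {
		fmt.Println(err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println(err)
		return
	}
	// The test expects 127.0.0.1/192.168.15.15 and localhost/www.google.com here.
	fmt.Println("IP SANs: ", cert.IPAddresses)
	fmt.Println("DNS SANs:", cert.DNSNames)
}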

TestCertExpiration (242.32s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-278276 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
E0116 04:03:18.771967  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/client.crt: no such file or directory
E0116 04:03:23.275360  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-278276 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (39.400227531s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-278276 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-278276 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (20.72929644s)
helpers_test.go:175: Cleaning up "cert-expiration-278276" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-278276
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-278276: (2.190824231s)
--- PASS: TestCertExpiration (242.32s)

TestForceSystemdFlag (32.28s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-113818 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-113818 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (29.380295667s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-113818 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-113818" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-113818
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-113818: (2.565667872s)
--- PASS: TestForceSystemdFlag (32.28s)
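
The ssh step above dumps CRI-O's drop-in config to confirm the --force-systemd flag took effect. The exact string the test matches is an assumption here, but the flag should surface as a systemd cgroup manager in 02-crio.conf:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same ssh invocation as the log line above.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "force-systemd-flag-113818",
		"ssh", "cat /etc/crio/crio.conf.d/02-crio.conf").CombinedOutput()
	if err != nil {
		fmt.Println(err)
		return
	}
	// The matched setting is an assumption for illustration.
	if strings.Contains(string(out), `cgroup_manager = "systemd"`) {
		fmt.Println("CRI-O is using the systemd cgroup manager")
	} else {
		fmt.Println("systemd cgroup manager not found in 02-crio.conf")
	}
}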

TestForceSystemdEnv (45.27s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-802726 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-802726 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (42.8137788s)
helpers_test.go:175: Cleaning up "force-systemd-env-802726" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-802726
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-802726: (2.451484337s)
--- PASS: TestForceSystemdEnv (45.27s)

TestErrorSpam/setup (30.43s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-387485 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-387485 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-387485 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-387485 --driver=docker  --container-runtime=crio: (30.430264076s)
--- PASS: TestErrorSpam/setup (30.43s)

TestErrorSpam/start (0.85s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-387485 --log_dir /tmp/nospam-387485 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-387485 --log_dir /tmp/nospam-387485 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-387485 --log_dir /tmp/nospam-387485 start --dry-run
--- PASS: TestErrorSpam/start (0.85s)

TestErrorSpam/status (1.1s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-387485 --log_dir /tmp/nospam-387485 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-387485 --log_dir /tmp/nospam-387485 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-387485 --log_dir /tmp/nospam-387485 status
--- PASS: TestErrorSpam/status (1.10s)

TestErrorSpam/pause (1.81s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-387485 --log_dir /tmp/nospam-387485 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-387485 --log_dir /tmp/nospam-387485 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-387485 --log_dir /tmp/nospam-387485 pause
--- PASS: TestErrorSpam/pause (1.81s)

TestErrorSpam/unpause (1.98s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-387485 --log_dir /tmp/nospam-387485 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-387485 --log_dir /tmp/nospam-387485 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-387485 --log_dir /tmp/nospam-387485 unpause
--- PASS: TestErrorSpam/unpause (1.98s)

TestErrorSpam/stop (1.46s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-387485 --log_dir /tmp/nospam-387485 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-387485 --log_dir /tmp/nospam-387485 stop: (1.215421622s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-387485 --log_dir /tmp/nospam-387485 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-387485 --log_dir /tmp/nospam-387485 stop
--- PASS: TestErrorSpam/stop (1.46s)
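
Each ErrorSpam subtest opens with a "Cleaning up N logfile(s)" step: leftover logs are removed from the --log_dir so every assertion only sees the latest invocation's output. A sketch of that cleanup; the directory is from the log, the *.log glob pattern is an assumption:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	logs, err := filepath.Glob(filepath.Join("/tmp/nospam-387485", "*.log"))
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("Cleaning up %d logfile(s) ...\n", len(logs))
	for _, f := range logs {
		if err := os.Remove(f); err != nil {
			fmt.Println("remove failed:", err)
		}
	}
}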

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /home/jenkins/minikube-integration/17967-719286/.minikube/files/etc/test/nested/copy/724621/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (75.66s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-linux-arm64 start -p functional-983329 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0116 03:28:18.774587  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/client.crt: no such file or directory
E0116 03:28:18.781640  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/client.crt: no such file or directory
E0116 03:28:18.791843  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/client.crt: no such file or directory
E0116 03:28:18.812088  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/client.crt: no such file or directory
E0116 03:28:18.852306  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/client.crt: no such file or directory
E0116 03:28:18.932586  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/client.crt: no such file or directory
E0116 03:28:19.092929  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/client.crt: no such file or directory
E0116 03:28:19.413199  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/client.crt: no such file or directory
E0116 03:28:20.054042  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/client.crt: no such file or directory
E0116 03:28:21.334520  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/client.crt: no such file or directory
E0116 03:28:23.896423  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/client.crt: no such file or directory
E0116 03:28:29.016623  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/client.crt: no such file or directory
E0116 03:28:39.257702  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/client.crt: no such file or directory
E0116 03:28:59.737908  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/client.crt: no such file or directory
functional_test.go:2233: (dbg) Done: out/minikube-linux-arm64 start -p functional-983329 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m15.664129314s)
--- PASS: TestFunctional/serial/StartWithProxy (75.66s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (33.6s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-983329 --alsologtostderr -v=8
E0116 03:29:40.699022  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-983329 --alsologtostderr -v=8: (33.597390656s)
functional_test.go:659: soft start took 33.601136433s for "functional-983329" cluster.
--- PASS: TestFunctional/serial/SoftStart (33.60s)

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-983329 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.78s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-983329 cache add registry.k8s.io/pause:3.1: (1.245098605s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-983329 cache add registry.k8s.io/pause:3.3: (1.287869124s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-983329 cache add registry.k8s.io/pause:latest: (1.24636355s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.78s)

TestFunctional/serial/CacheCmd/cache/add_local (1.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-983329 /tmp/TestFunctionalserialCacheCmdcacheadd_local1414365298/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 cache add minikube-local-cache-test:functional-983329
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 cache delete minikube-local-cache-test:functional-983329
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-983329
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.08s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-983329 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (343.591002ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-983329 cache reload: (1.039837142s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.09s)
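
The cache_reload sequence above is a remove -> verify-missing -> reload -> verify-present cycle: crictl rmi deletes the image from the node, the first inspecti is expected to fail with exit status 1, "cache reload" pushes the locally cached image back, and a final inspecti must succeed. A compact sketch of that flow, reusing the binary path and profile from the log:

package main

import (
	"fmt"
	"os/exec"
)

// step runs one command in the cycle; wantErr says whether a non-zero exit is expected.
func step(wantErr bool, args ...string) {
	out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
	if (err != nil) != wantErr {
		fmt.Printf("args=%v err=%v (wanted failure: %v)\n%s\n", args, err, wantErr, out)
	}
}

func main() {
	p := "functional-983329"
	step(false, "-p", p, "ssh", "sudo crictl rmi registry.k8s.io/pause:latest")
	step(true, "-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest") // image gone
	step(false, "-p", p, "cache", "reload")
	step(false, "-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest") // present again
}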

TestFunctional/serial/CacheCmd/cache/delete (0.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

TestFunctional/serial/MinikubeKubectlCmd (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 kubectl -- --context functional-983329 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-983329 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

TestFunctional/serial/ExtraConfig (33.63s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-983329 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-983329 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.631737981s)
functional_test.go:757: restart took 33.631827795s for "functional-983329" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (33.63s)

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-983329 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
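
The "phase: Running / status: Ready" pairs above come from decoding the control-plane pod list. A minimal sketch of that health check, assuming kubeadm's standard component label on the pods; the context and label selector are from the log:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type podList struct {
	Items []struct {
		Metadata struct {
			Labels map[string]string `json:"labels"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-983329",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		fmt.Println(err)
		return
	}
	for _, p := range pods.Items {
		status := "NotReady"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" && c.Status == "True" {
				status = "Ready"
			}
		}
		fmt.Printf("%s phase: %s, status: %s\n",
			p.Metadata.Labels["component"], p.Status.Phase, status)
	}
}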

TestFunctional/serial/LogsCmd (1.78s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-983329 logs: (1.780517836s)
--- PASS: TestFunctional/serial/LogsCmd (1.78s)

TestFunctional/serial/LogsFileCmd (1.8s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 logs --file /tmp/TestFunctionalserialLogsFileCmd1855118623/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-983329 logs --file /tmp/TestFunctionalserialLogsFileCmd1855118623/001/logs.txt: (1.800570236s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.80s)

TestFunctional/serial/InvalidService (4.55s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-983329 apply -f testdata/invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-983329
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-983329: exit status 115 (476.868403ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31298 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-983329 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.55s)

TestFunctional/parallel/ConfigCmd (0.62s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-983329 config get cpus: exit status 14 (123.451916ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-983329 config get cpus: exit status 14 (106.312524ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.62s)

TestFunctional/parallel/DashboardCmd (14.78s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-983329 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-983329 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 748591: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.78s)
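
The "unable to kill pid ...: os: process already finished" message above is benign: the dashboard daemon exited on its own before the cleanup could kill it. In Go that case is distinguishable via os.ErrProcessDone, as this small sketch shows (the sleep command stands in for the daemon):

package main

import (
	"errors"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("sleep", "0.1") // stand-in for the dashboard process
	if err := cmd.Start(); err != nil {
		fmt.Println(err)
		return
	}
	_ = cmd.Wait() // let it exit so Kill hits the already-finished path
	if err := cmd.Process.Kill(); err != nil {
		if errors.Is(err, os.ErrProcessDone) {
			fmt.Println("process already finished; nothing to kill")
			return
		}
		fmt.Println("kill failed:", err)
	}
}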

TestFunctional/parallel/DryRun (0.66s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-983329 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-983329 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (254.589921ms)
-- stdout --
	* [functional-983329] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17967
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17967-719286/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-719286/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0116 03:31:11.243189  748311 out.go:296] Setting OutFile to fd 1 ...
	I0116 03:31:11.243388  748311 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:31:11.243410  748311 out.go:309] Setting ErrFile to fd 2...
	I0116 03:31:11.243430  748311 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:31:11.246505  748311 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-719286/.minikube/bin
	I0116 03:31:11.246967  748311 out.go:303] Setting JSON to false
	I0116 03:31:11.247976  748311 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11621,"bootTime":1705364251,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0116 03:31:11.248099  748311 start.go:138] virtualization:  
	I0116 03:31:11.252137  748311 out.go:177] * [functional-983329] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0116 03:31:11.254547  748311 out.go:177]   - MINIKUBE_LOCATION=17967
	I0116 03:31:11.256479  748311 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 03:31:11.254696  748311 notify.go:220] Checking for updates...
	I0116 03:31:11.258979  748311 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17967-719286/kubeconfig
	I0116 03:31:11.261244  748311 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-719286/.minikube
	I0116 03:31:11.263198  748311 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0116 03:31:11.265214  748311 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 03:31:11.267570  748311 config.go:182] Loaded profile config "functional-983329": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:31:11.268295  748311 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 03:31:11.298195  748311 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0116 03:31:11.298393  748311 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 03:31:11.393337  748311 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:46 SystemTime:2024-01-16 03:31:11.383774683 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0116 03:31:11.393436  748311 docker.go:295] overlay module found
	I0116 03:31:11.397420  748311 out.go:177] * Using the docker driver based on existing profile
	I0116 03:31:11.399179  748311 start.go:298] selected driver: docker
	I0116 03:31:11.399195  748311 start.go:902] validating driver "docker" against &{Name:functional-983329 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-983329 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:31:11.399372  748311 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 03:31:11.402205  748311 out.go:177] 
	W0116 03:31:11.404758  748311 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0116 03:31:11.411251  748311 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-983329 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.66s)
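For reference, the memory guard exercised above can be reproduced against any existing profile by passing a deliberately undersized --memory to a --dry-run start; a minimal sketch, reusing the profile name and flags from this run:

	# Sketch: trips the same RSRC_INSUFFICIENT_REQ_MEMORY guard seen in the stderr above.
	out/minikube-linux-arm64 start -p functional-983329 --dry-run --memory 250MB \
	  --alsologtostderr --driver=docker --container-runtime=crio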

TestFunctional/parallel/InternationalLanguage (0.25s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-983329 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-983329 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (250.951698ms)
-- stdout --
	* [functional-983329] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17967
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17967-719286/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-719286/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0116 03:31:10.983626  748272 out.go:296] Setting OutFile to fd 1 ...
	I0116 03:31:10.983866  748272 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:31:10.983895  748272 out.go:309] Setting ErrFile to fd 2...
	I0116 03:31:10.983915  748272 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:31:10.984831  748272 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-719286/.minikube/bin
	I0116 03:31:10.985257  748272 out.go:303] Setting JSON to false
	I0116 03:31:10.986258  748272 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11620,"bootTime":1705364251,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0116 03:31:10.986355  748272 start.go:138] virtualization:  
	I0116 03:31:10.990286  748272 out.go:177] * [functional-983329] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	I0116 03:31:10.993111  748272 out.go:177]   - MINIKUBE_LOCATION=17967
	I0116 03:31:10.995387  748272 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 03:31:10.993208  748272 notify.go:220] Checking for updates...
	I0116 03:31:10.997569  748272 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17967-719286/kubeconfig
	I0116 03:31:10.999881  748272 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-719286/.minikube
	I0116 03:31:11.002381  748272 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0116 03:31:11.004360  748272 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 03:31:11.007252  748272 config.go:182] Loaded profile config "functional-983329": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:31:11.007790  748272 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 03:31:11.046072  748272 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0116 03:31:11.046199  748272 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 03:31:11.143550  748272 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:46 SystemTime:2024-01-16 03:31:11.133686049 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0116 03:31:11.143659  748272 docker.go:295] overlay module found
	I0116 03:31:11.147240  748272 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0116 03:31:11.149445  748272 start.go:298] selected driver: docker
	I0116 03:31:11.149462  748272 start.go:902] validating driver "docker" against &{Name:functional-983329 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-983329 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:31:11.149560  748272 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 03:31:11.151972  748272 out.go:177] 
	W0116 03:31:11.154058  748272 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0116 03:31:11.156147  748272 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.25s)
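The French stdout above is the point of this test: minikube localizes its client messages. As a rough sketch (driving the translation through the standard locale environment variables is an assumption about the mechanism, not taken from this log), the same French output can be requested by hand:

	# Sketch: ask for the fr translation of a dry-run start.
	LC_ALL=fr_FR.UTF-8 out/minikube-linux-arm64 start -p functional-983329 --dry-run \
	  --memory 250MB --driver=docker --container-runtime=crio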

TestFunctional/parallel/StatusCmd (1.13s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.13s)
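The -f value passed above is a Go template: only the {{.Host}}, {{.Kubelet}}, {{.APIServer}} and {{.Kubeconfig}} field references are interpreted, while the surrounding text (including the literal "kublet" label, spelled that way in the test itself) is copied to the output verbatim. A minimal sketch:

	# Sketch: any literal text may surround the template fields.
	out/minikube-linux-arm64 -p functional-983329 status -f 'host={{.Host}} apiserver={{.APIServer}}'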

TestFunctional/parallel/ServiceCmdConnect (11.76s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-983329 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-983329 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-str8p" [a31a01e0-c12b-495b-9dc9-b72c4f18d77d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-str8p" [a31a01e0-c12b-495b-9dc9-b72c4f18d77d] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.004087879s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:32028
functional_test.go:1674: http://192.168.49.2:32028: success! body:

Hostname: hello-node-connect-7799dfb7c6-str8p

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32028
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.76s)

TestFunctional/parallel/AddonsCmd (0.21s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.21s)

TestFunctional/parallel/PersistentVolumeClaim (26.74s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [55166090-fff2-449f-8bdd-f1a258e681ca] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003915038s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-983329 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-983329 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-983329 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-983329 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b48de9b7-acfa-431f-8234-4d707cbf9a2c] Pending
helpers_test.go:344: "sp-pod" [b48de9b7-acfa-431f-8234-4d707cbf9a2c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [b48de9b7-acfa-431f-8234-4d707cbf9a2c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.004029465s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-983329 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-983329 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-983329 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6983efc4-b037-42f3-a0f3-b7f9bc8a442b] Pending
helpers_test.go:344: "sp-pod" [6983efc4-b037-42f3-a0f3-b7f9bc8a442b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004250436s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-983329 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.74s)
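The touch-before-delete / ls-after-recreate sequence above only demonstrates persistence because both sp-pod instances mount the same claim. A hypothetical stand-in for testdata/storage-provisioner/pvc.yaml (only the claim name myclaim appears in this log; the access mode and size are assumptions):

	kubectl --context functional-983329 apply -f - <<'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: myclaim
	spec:
	  accessModes: ["ReadWriteOnce"]
	  resources:
	    requests:
	      storage: 500Mi
	EOF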

TestFunctional/parallel/SSHCmd (0.82s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.82s)

TestFunctional/parallel/CpCmd (2.13s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 ssh -n functional-983329 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 cp functional-983329:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3654553752/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 ssh -n functional-983329 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 ssh -n functional-983329 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.13s)

TestFunctional/parallel/FileSync (0.35s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/724621/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 ssh "sudo cat /etc/test/nested/copy/724621/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.35s)

TestFunctional/parallel/CertSync (2.37s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/724621.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 ssh "sudo cat /etc/ssl/certs/724621.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/724621.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 ssh "sudo cat /usr/share/ca-certificates/724621.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/7246212.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 ssh "sudo cat /etc/ssl/certs/7246212.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/7246212.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 ssh "sudo cat /usr/share/ca-certificates/7246212.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.37s)
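The .0 entries checked above (51391683.0, 3ec20f2e.0) are the OpenSSL subject-hash names for the same two certificates, which is how files under /etc/ssl/certs are indexed. Assuming openssl is available in the guest, the hash can be recomputed to tie a .pem to its .0 entry:

	# Sketch: prints the 8-hex-digit hash used as the /etc/ssl/certs/<hash>.0 filename.
	out/minikube-linux-arm64 -p functional-983329 ssh \
	  "sudo openssl x509 -noout -subject_hash -in /etc/ssl/certs/724621.pem"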

TestFunctional/parallel/NodeLabels (0.12s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-983329 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.86s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 ssh "sudo systemctl is-active docker"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-983329 ssh "sudo systemctl is-active docker": exit status 1 (396.588928ms)
-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2026: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 ssh "sudo systemctl is-active containerd"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-983329 ssh "sudo systemctl is-active containerd": exit status 1 (465.278131ms)
-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.86s)
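The non-zero exits above are the expected result: systemctl is-active exits 0 only for an active unit (an inactive unit conventionally yields status 3, the value visible in the ssh stderr), and since this cluster runs cri-o, both docker and containerd must report inactive. By hand:

	out/minikube-linux-arm64 -p functional-983329 ssh "sudo systemctl is-active docker"
	# prints "inactive"; the guest command exits 3, which minikube ssh surfaces as exit 1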

TestFunctional/parallel/License (0.34s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.34s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.63s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-983329 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-983329 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-983329 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 746405: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-983329 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.63s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-983329 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.32s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-983329 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [10400b63-e4c8-4514-beb4-d4349c76a04c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [10400b63-e4c8-4514-beb4-d4349c76a04c] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003651684s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.32s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.16s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-983329 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.16s)
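While minikube tunnel is running, LoadBalancer services receive an ingress IP (apparently the service's cluster IP here, judging by the 10.109.116.224 address used by AccessDirect below), and that is what the jsonpath query above reads. The same check by hand, with the tunnel from the earlier step still active:

	kubectl --context functional-983329 get svc nginx-svc \
	  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'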

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.116.224 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-983329 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-983329 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-983329 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-hztgl" [35ec6922-93b4-44f6-8435-04a1f80829c5] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-hztgl" [35ec6922-93b4-44f6-8435-04a1f80829c5] Running
E0116 03:31:02.620024  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/client.crt: no such file or directory
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.004043056s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.24s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1314: Took "368.544384ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1328: Took "72.602302ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1365: Took "341.221248ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1378: Took "70.462691ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

TestFunctional/parallel/MountCmd/any-port (8.57s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-983329 /tmp/TestFunctionalparallelMountCmdany-port704105859/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1705375866804511725" to /tmp/TestFunctionalparallelMountCmdany-port704105859/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1705375866804511725" to /tmp/TestFunctionalparallelMountCmdany-port704105859/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1705375866804511725" to /tmp/TestFunctionalparallelMountCmdany-port704105859/001/test-1705375866804511725
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-983329 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (437.105111ms)
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 16 03:31 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 16 03:31 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 16 03:31 test-1705375866804511725
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 ssh cat /mount-9p/test-1705375866804511725
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-983329 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [b14b696b-899e-419f-bc51-3766f5bf05c8] Pending
helpers_test.go:344: "busybox-mount" [b14b696b-899e-419f-bc51-3766f5bf05c8] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [b14b696b-899e-419f-bc51-3766f5bf05c8] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [b14b696b-899e-419f-bc51-3766f5bf05c8] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003918944s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-983329 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-983329 /tmp/TestFunctionalparallelMountCmdany-port704105859/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.57s)
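Note the first findmnt probe above failed with exit status 1 simply because grep found no 9p entry before the mount daemon finished coming up; the test retries and the second probe succeeds. The retry amounts to a loop like this sketch:

	# Poll until the 9p mount shows up in the guest.
	until out/minikube-linux-arm64 -p functional-983329 ssh "findmnt -T /mount-9p | grep 9p"; do
	  sleep 1
	done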

TestFunctional/parallel/ServiceCmd/List (0.65s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.65s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.61s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 service list -o json
functional_test.go:1493: Took "611.341384ms" to run "out/minikube-linux-arm64 -p functional-983329 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.61s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:31352
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

TestFunctional/parallel/ServiceCmd/Format (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.54s)

TestFunctional/parallel/ServiceCmd/URL (0.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:31352
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.43s)

TestFunctional/parallel/MountCmd/specific-port (2.27s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-983329 /tmp/TestFunctionalparallelMountCmdspecific-port3688643788/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-983329 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (661.33956ms)
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-983329 /tmp/TestFunctionalparallelMountCmdspecific-port3688643788/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-983329 ssh "sudo umount -f /mount-9p": exit status 1 (382.957206ms)
-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-983329 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-983329 /tmp/TestFunctionalparallelMountCmdspecific-port3688643788/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.27s)
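The failing umount -f at the end is likewise expected here: stopping the mount process already detached /mount-9p, and umount reports "not mounted" as a failure (status 32 being the generic failure code in mount(8)'s exit-code convention). The test records the non-zero exit as confirmation that nothing was left mounted:

	out/minikube-linux-arm64 -p functional-983329 ssh "sudo umount -f /mount-9p"
	# "umount: /mount-9p: not mounted." with guest exit 32 once cleanup already ran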

TestFunctional/parallel/MountCmd/VerifyCleanup (2.3s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-983329 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1608300021/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-983329 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1608300021/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-983329 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1608300021/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Done: out/minikube-linux-arm64 -p functional-983329 ssh "findmnt -T" /mount1: (1.362173133s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-983329 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-983329 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1608300021/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-983329 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1608300021/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-983329 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1608300021/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.30s)

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (1.14s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 version -o=json --components
functional_test.go:2269: (dbg) Done: out/minikube-linux-arm64 -p functional-983329 version -o=json --components: (1.13590065s)
--- PASS: TestFunctional/parallel/Version/components (1.14s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-983329 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-983329
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-983329 image ls --format short --alsologtostderr:
I0116 03:31:41.440532  751141 out.go:296] Setting OutFile to fd 1 ...
I0116 03:31:41.440761  751141 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 03:31:41.440795  751141 out.go:309] Setting ErrFile to fd 2...
I0116 03:31:41.440815  751141 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 03:31:41.441098  751141 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-719286/.minikube/bin
I0116 03:31:41.441771  751141 config.go:182] Loaded profile config "functional-983329": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 03:31:41.441979  751141 config.go:182] Loaded profile config "functional-983329": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 03:31:41.442524  751141 cli_runner.go:164] Run: docker container inspect functional-983329 --format={{.State.Status}}
I0116 03:31:41.466441  751141 ssh_runner.go:195] Run: systemctl --version
I0116 03:31:41.466500  751141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-983329
I0116 03:31:41.492182  751141 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33492 SSHKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/machines/functional-983329/id_rsa Username:docker}
I0116 03:31:41.590504  751141 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-983329 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | 04b4eaa3d3db8 | 60.9MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 9cdd6470f48c8 | 182MB  |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 04b4c447bb9d4 | 121MB  |
| registry.k8s.io/kube-controller-manager | v1.28.4            | 9961cbceaf234 | 117MB  |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| docker.io/library/nginx                 | latest             | 6c7be49d2a11c | 196MB  |
| gcr.io/google-containers/addon-resizer  | functional-983329  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| registry.k8s.io/pause                   | 3.9                | 829e9de338bd5 | 520kB  |
| docker.io/library/nginx                 | alpine             | 74077e780ec71 | 45.3MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | 97e04611ad434 | 51.4MB |
| registry.k8s.io/kube-proxy              | v1.28.4            | 3ca3ca488cf13 | 70MB   |
| registry.k8s.io/kube-scheduler          | v1.28.4            | 05c284c929889 | 59.3MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-983329 image ls --format table --alsologtostderr:
I0116 03:31:42.063416  751273 out.go:296] Setting OutFile to fd 1 ...
I0116 03:31:42.063586  751273 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 03:31:42.063596  751273 out.go:309] Setting ErrFile to fd 2...
I0116 03:31:42.063602  751273 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 03:31:42.063882  751273 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-719286/.minikube/bin
I0116 03:31:42.064550  751273 config.go:182] Loaded profile config "functional-983329": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 03:31:42.064714  751273 config.go:182] Loaded profile config "functional-983329": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 03:31:42.065306  751273 cli_runner.go:164] Run: docker container inspect functional-983329 --format={{.State.Status}}
I0116 03:31:42.093149  751273 ssh_runner.go:195] Run: systemctl --version
I0116 03:31:42.093204  751273 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-983329
I0116 03:31:42.138800  751273 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33492 SSHKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/machines/functional-983329/id_rsa Username:docker}
I0116 03:31:42.238848  751273 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-983329 image ls --format json --alsologtostderr:
[{"id":"74077e780ec714353793e0ef5677b55d7396aa1d31e77ec899f54842f7142448","repoDigests":["docker.io/library/nginx@sha256:7913e8fa2e6a5f0160a5e6b7ea48b7d4a301c6058d63c3d632a35a59093cb4eb","docker.io/library/nginx@sha256:a59278fd22a9d411121e190b8cec8aa57b306aa3332459197777583beb728f59"],"repoTags":["docker.io/library/nginx:alpine"],"size":"45330189"},
{"id":"9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"117252916"},
{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},
{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},
{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},
{"id":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3","registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"182203183"},
{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51393451"},
{"id":"05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"59253556"},
{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},
{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},
{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-983329"],"size":"34114467"},
{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},
{"id":"04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb","registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"121119694"},
{"id":"3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","repoDigests":["registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"69992343"},
{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},
{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"520014"},
{"id":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"60867618"},
{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},
{"id":"6c7be49d2a11cfab9a87362ad27d447b45931e43dfa6919a8e1398ec09c1e353","repoDigests":["docker.io/library/nginx@sha256:4c0fdaa8b6341bfdeca5f18f7837462c80cff90527ee35ef185571e1c327beac","docker.io/library/nginx@sha256:523c417937604bc107d799e5cad1ae2ca8a9fd46306634fa2c547dc6220ec17c"],"repoTags":["docker.io/library/nginx:latest"],"size":"196113558"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-983329 image ls --format json --alsologtostderr:
I0116 03:31:41.767671  751202 out.go:296] Setting OutFile to fd 1 ...
I0116 03:31:41.767873  751202 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 03:31:41.767909  751202 out.go:309] Setting ErrFile to fd 2...
I0116 03:31:41.767928  751202 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 03:31:41.768243  751202 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-719286/.minikube/bin
I0116 03:31:41.768973  751202 config.go:182] Loaded profile config "functional-983329": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 03:31:41.769173  751202 config.go:182] Loaded profile config "functional-983329": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 03:31:41.769749  751202 cli_runner.go:164] Run: docker container inspect functional-983329 --format={{.State.Status}}
I0116 03:31:41.798205  751202 ssh_runner.go:195] Run: systemctl --version
I0116 03:31:41.798259  751202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-983329
I0116 03:31:41.820193  751202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33492 SSHKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/machines/functional-983329/id_rsa Username:docker}
I0116 03:31:41.921755  751202 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)
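
The JSON format above is an array of objects with id, repoDigests, repoTags, and size keys. A minimal decoding sketch in Go; the struct below simply mirrors those keys and is not minikube's own type:

// decodeimages.go - a sketch for decoding "image ls --format json" output.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, encoded as a string, e.g. "45330189"
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-983329",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		// Untagged entries (the dashboard and metrics-scraper images above)
		// come back with an empty repoTags slice.
		fmt.Printf("%.12s tags=%v size=%s\n", img.ID, img.RepoTags, img.Size)
	}
}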

TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-983329 image ls --format yaml --alsologtostderr:
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "117252916"
- id: 3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39
repoDigests:
- registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "69992343"
- id: 04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
- registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "121119694"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "520014"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 74077e780ec714353793e0ef5677b55d7396aa1d31e77ec899f54842f7142448
repoDigests:
- docker.io/library/nginx@sha256:7913e8fa2e6a5f0160a5e6b7ea48b7d4a301c6058d63c3d632a35a59093cb4eb
- docker.io/library/nginx@sha256:a59278fd22a9d411121e190b8cec8aa57b306aa3332459197777583beb728f59
repoTags:
- docker.io/library/nginx:alpine
size: "45330189"
- id: 6c7be49d2a11cfab9a87362ad27d447b45931e43dfa6919a8e1398ec09c1e353
repoDigests:
- docker.io/library/nginx@sha256:4c0fdaa8b6341bfdeca5f18f7837462c80cff90527ee35ef185571e1c327beac
- docker.io/library/nginx@sha256:523c417937604bc107d799e5cad1ae2ca8a9fd46306634fa2c547dc6220ec17c
repoTags:
- docker.io/library/nginx:latest
size: "196113558"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-983329
size: "34114467"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "59253556"
- id: 04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "60867618"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51393451"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
- registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "182203183"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-983329 image ls --format yaml --alsologtostderr:
I0116 03:31:41.435552  751140 out.go:296] Setting OutFile to fd 1 ...
I0116 03:31:41.435747  751140 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 03:31:41.435759  751140 out.go:309] Setting ErrFile to fd 2...
I0116 03:31:41.435766  751140 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 03:31:41.436032  751140 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-719286/.minikube/bin
I0116 03:31:41.436759  751140 config.go:182] Loaded profile config "functional-983329": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 03:31:41.436953  751140 config.go:182] Loaded profile config "functional-983329": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 03:31:41.437576  751140 cli_runner.go:164] Run: docker container inspect functional-983329 --format={{.State.Status}}
I0116 03:31:41.462910  751140 ssh_runner.go:195] Run: systemctl --version
I0116 03:31:41.463274  751140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-983329
I0116 03:31:41.486738  751140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33492 SSHKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/machines/functional-983329/id_rsa Username:docker}
I0116 03:31:41.585825  751140 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.87s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-983329 ssh pgrep buildkitd: exit status 1 (393.89535ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 image build -t localhost/my-image:functional-983329 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-983329 image build -t localhost/my-image:functional-983329 testdata/build --alsologtostderr: (2.220120088s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-arm64 -p functional-983329 image build -t localhost/my-image:functional-983329 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 53cbb8daef5
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-983329
--> 3438dc491ea
Successfully tagged localhost/my-image:functional-983329
3438dc491eaeb10d11372a9f24dc0b4216d9fa415f268cbd29f03ca6cdef06e4
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-983329 image build -t localhost/my-image:functional-983329 testdata/build --alsologtostderr:
I0116 03:31:42.140580  751280 out.go:296] Setting OutFile to fd 1 ...
I0116 03:31:42.141362  751280 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 03:31:42.141400  751280 out.go:309] Setting ErrFile to fd 2...
I0116 03:31:42.141422  751280 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 03:31:42.141762  751280 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-719286/.minikube/bin
I0116 03:31:42.142600  751280 config.go:182] Loaded profile config "functional-983329": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 03:31:42.143434  751280 config.go:182] Loaded profile config "functional-983329": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 03:31:42.144136  751280 cli_runner.go:164] Run: docker container inspect functional-983329 --format={{.State.Status}}
I0116 03:31:42.168980  751280 ssh_runner.go:195] Run: systemctl --version
I0116 03:31:42.169035  751280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-983329
I0116 03:31:42.191843  751280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33492 SSHKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/machines/functional-983329/id_rsa Username:docker}
I0116 03:31:42.294267  751280 build_images.go:151] Building image from path: /tmp/build.2731749492.tar
I0116 03:31:42.294339  751280 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0116 03:31:42.308293  751280 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2731749492.tar
I0116 03:31:42.312679  751280 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2731749492.tar: stat -c "%s %y" /var/lib/minikube/build/build.2731749492.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2731749492.tar': No such file or directory
I0116 03:31:42.312714  751280 ssh_runner.go:362] scp /tmp/build.2731749492.tar --> /var/lib/minikube/build/build.2731749492.tar (3072 bytes)
I0116 03:31:42.342620  751280 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2731749492
I0116 03:31:42.353839  751280 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2731749492 -xf /var/lib/minikube/build/build.2731749492.tar
I0116 03:31:42.364605  751280 crio.go:297] Building image: /var/lib/minikube/build/build.2731749492
I0116 03:31:42.364675  751280 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-983329 /var/lib/minikube/build/build.2731749492 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0116 03:31:44.225469  751280 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-983329 /var/lib/minikube/build/build.2731749492 --cgroup-manager=cgroupfs: (1.860763775s)
I0116 03:31:44.225534  751280 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2731749492
I0116 03:31:44.236738  751280 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2731749492.tar
I0116 03:31:44.246905  751280 build_images.go:207] Built localhost/my-image:functional-983329 from /tmp/build.2731749492.tar
I0116 03:31:44.246934  751280 build_images.go:123] succeeded building to: functional-983329
I0116 03:31:44.246939  751280 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.87s)
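
The stderr above shows how a crio-backed node builds an image: the local build context is packed into a tar, copied to /var/lib/minikube/build on the node, unpacked, and built with sudo podman build. A simplified sketch of that flow, using the minikube cp/ssh commands rather than the direct SSH client that build_images.go drives; the tarball path and ctx directory name here are illustrative:

// buildflow.go - a sketch mirroring the build flow visible in the log above.
package main

import (
	"log"
	"os/exec"
)

func run(name string, args ...string) {
	if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
	}
}

func main() {
	mk := "out/minikube-linux-arm64"
	// Pack the build context, copy it into the node, unpack, then build.
	run("tar", "-cf", "/tmp/build.tar", "-C", "testdata/build", ".")
	run(mk, "-p", "functional-983329", "cp", "/tmp/build.tar", "/var/lib/minikube/build/build.tar")
	run(mk, "-p", "functional-983329", "ssh", "sudo mkdir -p /var/lib/minikube/build/ctx && sudo tar -C /var/lib/minikube/build/ctx -xf /var/lib/minikube/build/build.tar")
	run(mk, "-p", "functional-983329", "ssh", "sudo podman build -t localhost/my-image:functional-983329 /var/lib/minikube/build/ctx --cgroup-manager=cgroupfs")
}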

TestFunctional/parallel/ImageCommands/Setup (1.7s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.682524079s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-983329
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.70s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.01s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 image load --daemon gcr.io/google-containers/addon-resizer:functional-983329 --alsologtostderr
2024/01/16 03:31:26 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-983329 image load --daemon gcr.io/google-containers/addon-resizer:functional-983329 --alsologtostderr: (4.715909576s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.01s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 image load --daemon gcr.io/google-containers/addon-resizer:functional-983329 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-983329 image load --daemon gcr.io/google-containers/addon-resizer:functional-983329 --alsologtostderr: (3.228218977s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.53s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.383496496s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-983329
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 image load --daemon gcr.io/google-containers/addon-resizer:functional-983329 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-983329 image load --daemon gcr.io/google-containers/addon-resizer:functional-983329 --alsologtostderr: (3.610370586s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.26s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.91s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 image save gcr.io/google-containers/addon-resizer:functional-983329 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.91s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 image rm gcr.io/google-containers/addon-resizer:functional-983329 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p functional-983329 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr: (1.040911178s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.30s)
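
The last three tests exercise a save/remove/load round trip through a tarball on the host. A condensed sketch of the same sequence; the /tmp tarball path is arbitrary, the image reference matches the log:

// roundtrip.go - a sketch of the ImageSaveToFile, ImageRemove, and
// ImageLoadFromFile sequence above.
package main

import (
	"log"
	"os/exec"
)

func mk(args ...string) {
	all := append([]string{"-p", "functional-983329"}, args...)
	if out, err := exec.Command("out/minikube-linux-arm64", all...).CombinedOutput(); err != nil {
		log.Fatalf("minikube %v: %v\n%s", args, err, out)
	}
}

func main() {
	const img = "gcr.io/google-containers/addon-resizer:functional-983329"
	mk("image", "save", img, "/tmp/addon-resizer-save.tar") // export to a tarball on the host
	mk("image", "rm", img)                                  // remove it from the runtime
	mk("image", "load", "/tmp/addon-resizer-save.tar")      // reload it from the tarball
	mk("image", "ls")                                       // the reference should be listed again
}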

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.95s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-983329
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-983329 image save --daemon gcr.io/google-containers/addon-resizer:functional-983329 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-983329
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.95s)

TestFunctional/delete_addon-resizer_images (0.08s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-983329
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-983329
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-983329
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (83.68s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-194312 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-194312 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m23.682988197s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (83.68s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.46s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-194312 addons enable ingress --alsologtostderr -v=5
E0116 03:33:18.771480  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-194312 addons enable ingress --alsologtostderr -v=5: (11.457781156s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.46s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.67s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-194312 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.67s)

TestJSONOutput/start/Command (51.06s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-842003 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0116 03:37:00.698033  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/functional-983329/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-842003 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (51.055598432s)
--- PASS: TestJSONOutput/start/Command (51.06s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.79s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-842003 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.79s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.73s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-842003 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.73s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.93s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-842003 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-842003 --output=json --user=testUser: (5.929528113s)
--- PASS: TestJSONOutput/stop/Command (5.93s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.26s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-818893 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-818893 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (100.411813ms)

-- stdout --
	{"specversion":"1.0","id":"8689f812-1ab9-4722-b588-70a85ead6714","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-818893] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e8503359-5994-438e-9f6b-5c131bcb2297","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17967"}}
	{"specversion":"1.0","id":"29cb5b73-cc36-4b61-ad21-cfed6970d338","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"fceb5c0b-4561-4ec3-8f32-5eb718595dfa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17967-719286/kubeconfig"}}
	{"specversion":"1.0","id":"89da2bfc-641e-4606-8ac4-0cb9776b30f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-719286/.minikube"}}
	{"specversion":"1.0","id":"649285e8-5e81-4d85-b845-8938c0d49b76","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"37899123-a7fe-46d7-bd7e-e5bd51b59f93","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"56a59e6d-9599-42fb-b92a-8a4dbaec15cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-818893" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-818893
--- PASS: TestErrorJSONOutput (0.26s)
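
The --output=json stream above emits one CloudEvents-style object per line, with specversion, id, source, type, and a data map. A minimal consumer sketch; the struct models only the fields visible in the log and is not minikube's own type:

// events.go - decode the --output=json event stream line by line.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"` // io.k8s.sigs.minikube.step, .info, or .error
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // e.g. pipe `minikube start --output=json` in
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate any non-JSON noise on the stream
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error (exit %s): %s\n", ev.Data["exitcode"], ev.Data["message"])
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		default:
			fmt.Println(ev.Data["message"])
		}
	}
}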

TestKicCustomNetwork/create_custom_network (44.62s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-663648 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-663648 --network=: (42.579584166s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-663648" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-663648
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-663648: (2.013735195s)
--- PASS: TestKicCustomNetwork/create_custom_network (44.62s)

TestKicCustomNetwork/use_default_bridge_network (32.04s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-978001 --network=bridge
E0116 03:38:18.771130  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/client.crt: no such file or directory
E0116 03:38:22.618338  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/functional-983329/client.crt: no such file or directory
E0116 03:38:23.274360  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/client.crt: no such file or directory
E0116 03:38:23.279982  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/client.crt: no such file or directory
E0116 03:38:23.290205  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/client.crt: no such file or directory
E0116 03:38:23.310435  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/client.crt: no such file or directory
E0116 03:38:23.350668  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/client.crt: no such file or directory
E0116 03:38:23.430920  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/client.crt: no such file or directory
E0116 03:38:23.591289  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/client.crt: no such file or directory
E0116 03:38:23.911776  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/client.crt: no such file or directory
E0116 03:38:24.552649  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/client.crt: no such file or directory
E0116 03:38:25.832876  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/client.crt: no such file or directory
E0116 03:38:28.393074  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/client.crt: no such file or directory
E0116 03:38:33.514026  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-978001 --network=bridge: (29.960859405s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-978001" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-978001
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-978001: (2.055192877s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (32.04s)

TestKicExistingNetwork (34.1s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-186631 --network=existing-network
E0116 03:38:43.754953  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/client.crt: no such file or directory
E0116 03:39:04.235163  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-186631 --network=existing-network: (31.938846904s)
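Note: this test first creates a Docker network named existing-network outside of minikube (the `docker network ls` call above enumerates the networks already present), then verifies minikube can join it via --network. A rough hand-run equivalent, with an arbitrary illustrative subnet:

    docker network create --subnet=192.168.70.0/24 existing-network
    out/minikube-linux-arm64 start -p existing-network-186631 --network=existing-network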
helpers_test.go:175: Cleaning up "existing-network-186631" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-186631
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-186631: (2.017999171s)
--- PASS: TestKicExistingNetwork (34.10s)

TestKicCustomSubnet (34.4s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-297360 --subnet=192.168.60.0/24
E0116 03:39:45.195307  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-297360 --subnet=192.168.60.0/24: (32.289774822s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-297360 --format "{{(index .IPAM.Config 0).Subnet}}"
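Note: the inspect call above reads back the IPAM config of the network minikube created, which must match the CIDR passed via --subnet. Expected round-trip (sketch):

    docker network inspect custom-subnet-297360 --format '{{(index .IPAM.Config 0).Subnet}}'
    # expected output: 192.168.60.0/24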
helpers_test.go:175: Cleaning up "custom-subnet-297360" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-297360
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-297360: (2.087387173s)
--- PASS: TestKicCustomSubnet (34.40s)

TestKicStaticIP (34.56s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-259582 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-259582 --static-ip=192.168.200.200: (32.281481879s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-259582 ip
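Note: the final assertion is that `minikube ip` reports exactly the address requested with --static-ip. As a one-liner (sketch):

    [ "$(out/minikube-linux-arm64 -p static-ip-259582 ip)" = "192.168.200.200" ] && echo "static IP honored"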
helpers_test.go:175: Cleaning up "static-ip-259582" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-259582
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-259582: (2.103876948s)
--- PASS: TestKicStaticIP (34.56s)

TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.07s)

TestMinikubeProfile (67.91s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-903638 --driver=docker  --container-runtime=crio
E0116 03:40:38.776318  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/functional-983329/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-903638 --driver=docker  --container-runtime=crio: (32.468657771s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-906256 --driver=docker  --container-runtime=crio
E0116 03:41:06.459424  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/functional-983329/client.crt: no such file or directory
E0116 03:41:07.115504  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-906256 --driver=docker  --container-runtime=crio: (30.159552285s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-903638
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-906256
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
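Note: `profile list -ojson` is the machine-readable form the test parses to confirm both profiles exist and that `minikube profile <name>` switched the active profile. A hedged sketch using jq (jq is not part of the test, and the "valid" key below is an assumption about the current JSON schema):

    out/minikube-linux-arm64 profile list -ojson | jq -r '.valid[].Name'
    # expected to list: first-903638 and second-906256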
helpers_test.go:175: Cleaning up "second-906256" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-906256
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-906256: (1.989737655s)
helpers_test.go:175: Cleaning up "first-903638" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-903638
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-903638: (1.970560881s)
--- PASS: TestMinikubeProfile (67.91s)

TestMountStart/serial/StartWithMountFirst (9.14s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-008484 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-008484 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.137729989s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.14s)

TestMountStart/serial/VerifyMountFirst (0.29s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-008484 ssh -- ls /minikube-host
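Note: the --mount* flags used for this profile configure a 9p share of the host directory into the guest at /minikube-host (uid/gid 0, msize 6543, port 46464), and the `ssh -- ls` above is the visibility check. A manual variant that also inspects the mount itself (sketch; output format may differ across guests):

    out/minikube-linux-arm64 -p mount-start-1-008484 ssh -- ls /minikube-host
    out/minikube-linux-arm64 -p mount-start-1-008484 ssh -- "mount | grep 9p"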
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)

TestMountStart/serial/StartWithMountSecond (6.64s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-010273 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-010273 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.639054536s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.64s)

TestMountStart/serial/VerifyMountSecond (0.29s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-010273 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

TestMountStart/serial/DeleteFirst (1.64s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-008484 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-008484 --alsologtostderr -v=5: (1.639766118s)
--- PASS: TestMountStart/serial/DeleteFirst (1.64s)

TestMountStart/serial/VerifyMountPostDelete (0.29s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-010273 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.29s)

TestMountStart/serial/Stop (1.22s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-010273
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-010273: (1.219204443s)
--- PASS: TestMountStart/serial/Stop (1.22s)

TestMountStart/serial/RestartStopped (7.87s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-010273
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-010273: (6.865823302s)
--- PASS: TestMountStart/serial/RestartStopped (7.87s)

TestMountStart/serial/VerifyMountPostStop (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-010273 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

TestMultiNode/serial/FreshStart2Nodes (122.39s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-741097 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0116 03:43:18.772225  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/client.crt: no such file or directory
E0116 03:43:23.275201  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/client.crt: no such file or directory
E0116 03:43:50.955725  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-arm64 start -p multinode-741097 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (2m1.830423991s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-arm64 -p multinode-741097 status --alsologtostderr
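Note: --nodes=2 yields one control-plane node (multinode-741097) plus one worker (multinode-741097-m02), and the status call above asserts both are Running. A scripted variant (sketch; for multi-node profiles the JSON output is a list, and the Name field is inferred from the status structs logged later in this report):

    out/minikube-linux-arm64 -p multinode-741097 status --output json | jq -r '.[].Name'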
--- PASS: TestMultiNode/serial/FreshStart2Nodes (122.39s)

TestMultiNode/serial/DeployApp2Nodes (5.11s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-741097 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-741097 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-741097 -- rollout status deployment/busybox: (3.037773237s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-741097 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-741097 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-741097 -- exec busybox-5bc68d56bd-5xhls -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-741097 -- exec busybox-5bc68d56bd-zwvv5 -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-741097 -- exec busybox-5bc68d56bd-5xhls -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-741097 -- exec busybox-5bc68d56bd-zwvv5 -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-741097 -- exec busybox-5bc68d56bd-5xhls -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-741097 -- exec busybox-5bc68d56bd-zwvv5 -- nslookup kubernetes.default.svc.cluster.local
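Note: the busybox deployment places one pod per node (the test manifest spreads them across nodes), and each pod resolves kubernetes.io, kubernetes.default, and the full cluster FQDN, proving both external and in-cluster DNS work from both nodes. The per-pod probe reduces to (pod name hypothetical):

    kubectl --context multinode-741097 exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local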
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.11s)

TestMultiNode/serial/AddNode (48.04s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-741097 -v 3 --alsologtostderr
E0116 03:44:41.821799  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/client.crt: no such file or directory
multinode_test.go:111: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-741097 -v 3 --alsologtostderr: (47.273404075s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-arm64 -p multinode-741097 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (48.04s)

TestMultiNode/serial/MultiNodeLabels (0.1s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-741097 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

TestMultiNode/serial/ProfileList (0.35s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.35s)

TestMultiNode/serial/CopyFile (10.94s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-arm64 -p multinode-741097 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-741097 cp testdata/cp-test.txt multinode-741097:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-741097 ssh -n multinode-741097 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-741097 cp multinode-741097:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile752462302/001/cp-test_multinode-741097.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-741097 ssh -n multinode-741097 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-741097 cp multinode-741097:/home/docker/cp-test.txt multinode-741097-m02:/home/docker/cp-test_multinode-741097_multinode-741097-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-741097 ssh -n multinode-741097 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-741097 ssh -n multinode-741097-m02 "sudo cat /home/docker/cp-test_multinode-741097_multinode-741097-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-741097 cp multinode-741097:/home/docker/cp-test.txt multinode-741097-m03:/home/docker/cp-test_multinode-741097_multinode-741097-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-741097 ssh -n multinode-741097 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-741097 ssh -n multinode-741097-m03 "sudo cat /home/docker/cp-test_multinode-741097_multinode-741097-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-741097 cp testdata/cp-test.txt multinode-741097-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-741097 ssh -n multinode-741097-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-741097 cp multinode-741097-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile752462302/001/cp-test_multinode-741097-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-741097 ssh -n multinode-741097-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-741097 cp multinode-741097-m02:/home/docker/cp-test.txt multinode-741097:/home/docker/cp-test_multinode-741097-m02_multinode-741097.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-741097 ssh -n multinode-741097-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-741097 ssh -n multinode-741097 "sudo cat /home/docker/cp-test_multinode-741097-m02_multinode-741097.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-741097 cp multinode-741097-m02:/home/docker/cp-test.txt multinode-741097-m03:/home/docker/cp-test_multinode-741097-m02_multinode-741097-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-741097 ssh -n multinode-741097-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-741097 ssh -n multinode-741097-m03 "sudo cat /home/docker/cp-test_multinode-741097-m02_multinode-741097-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-741097 cp testdata/cp-test.txt multinode-741097-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-741097 ssh -n multinode-741097-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-741097 cp multinode-741097-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile752462302/001/cp-test_multinode-741097-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-741097 ssh -n multinode-741097-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-741097 cp multinode-741097-m03:/home/docker/cp-test.txt multinode-741097:/home/docker/cp-test_multinode-741097-m03_multinode-741097.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-741097 ssh -n multinode-741097-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-741097 ssh -n multinode-741097 "sudo cat /home/docker/cp-test_multinode-741097-m03_multinode-741097.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-741097 cp multinode-741097-m03:/home/docker/cp-test.txt multinode-741097-m02:/home/docker/cp-test_multinode-741097-m03_multinode-741097-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-741097 ssh -n multinode-741097-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-741097 ssh -n multinode-741097-m02 "sudo cat /home/docker/cp-test_multinode-741097-m03_multinode-741097-m02.txt"
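Note: each hop above pairs a `minikube cp` with an ssh'd `cat` to verify the copy, covering host-to-node, node-to-host, and node-to-node for every pair of the three nodes. The node-to-node form, in general (placeholders hypothetical):

    out/minikube-linux-arm64 -p <profile> cp <src-node>:/home/docker/cp-test.txt <dst-node>:/home/docker/cp-test_copy.txt
    out/minikube-linux-arm64 -p <profile> ssh -n <dst-node> "sudo cat /home/docker/cp-test_copy.txt"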
--- PASS: TestMultiNode/serial/CopyFile (10.94s)

TestMultiNode/serial/StopNode (2.35s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p multinode-741097 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-arm64 -p multinode-741097 node stop m03: (1.239230545s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p multinode-741097 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-741097 status: exit status 7 (569.971437ms)
-- stdout --
	multinode-741097
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-741097-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-741097-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-arm64 -p multinode-741097 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-741097 status --alsologtostderr: exit status 7 (541.206607ms)
-- stdout --
	multinode-741097
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-741097-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-741097-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0116 03:45:13.732322  797521 out.go:296] Setting OutFile to fd 1 ...
	I0116 03:45:13.732549  797521 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:45:13.732580  797521 out.go:309] Setting ErrFile to fd 2...
	I0116 03:45:13.732601  797521 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:45:13.732858  797521 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-719286/.minikube/bin
	I0116 03:45:13.733065  797521 out.go:303] Setting JSON to false
	I0116 03:45:13.733181  797521 mustload.go:65] Loading cluster: multinode-741097
	I0116 03:45:13.733248  797521 notify.go:220] Checking for updates...
	I0116 03:45:13.734452  797521 config.go:182] Loaded profile config "multinode-741097": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:45:13.734495  797521 status.go:255] checking status of multinode-741097 ...
	I0116 03:45:13.735090  797521 cli_runner.go:164] Run: docker container inspect multinode-741097 --format={{.State.Status}}
	I0116 03:45:13.754005  797521 status.go:330] multinode-741097 host status = "Running" (err=<nil>)
	I0116 03:45:13.754027  797521 host.go:66] Checking if "multinode-741097" exists ...
	I0116 03:45:13.754323  797521 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-741097
	I0116 03:45:13.771502  797521 host.go:66] Checking if "multinode-741097" exists ...
	I0116 03:45:13.771835  797521 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0116 03:45:13.771879  797521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-741097
	I0116 03:45:13.789855  797521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33557 SSHKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/machines/multinode-741097/id_rsa Username:docker}
	I0116 03:45:13.886356  797521 ssh_runner.go:195] Run: systemctl --version
	I0116 03:45:13.891573  797521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:45:13.906069  797521 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 03:45:13.975188  797521 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:55 SystemTime:2024-01-16 03:45:13.965060374 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0116 03:45:13.975790  797521 kubeconfig.go:92] found "multinode-741097" server: "https://192.168.58.2:8443"
	I0116 03:45:13.975813  797521 api_server.go:166] Checking apiserver status ...
	I0116 03:45:13.975858  797521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:45:13.988523  797521 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1259/cgroup
	I0116 03:45:13.999308  797521 api_server.go:182] apiserver freezer: "4:freezer:/docker/888bdee9b3912af7f1daa0ba33eed024d886f6d4bcbb0e38263793114ce465e5/crio/crio-61ae06d6a2da632d93ca62f210eb7d103655bee624906892d446709118d16787"
	I0116 03:45:13.999377  797521 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/888bdee9b3912af7f1daa0ba33eed024d886f6d4bcbb0e38263793114ce465e5/crio/crio-61ae06d6a2da632d93ca62f210eb7d103655bee624906892d446709118d16787/freezer.state
	I0116 03:45:14.009942  797521 api_server.go:204] freezer state: "THAWED"
	I0116 03:45:14.009977  797521 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0116 03:45:14.019938  797521 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0116 03:45:14.019969  797521 status.go:421] multinode-741097 apiserver status = Running (err=<nil>)
	I0116 03:45:14.019980  797521 status.go:257] multinode-741097 status: &{Name:multinode-741097 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0116 03:45:14.019996  797521 status.go:255] checking status of multinode-741097-m02 ...
	I0116 03:45:14.020337  797521 cli_runner.go:164] Run: docker container inspect multinode-741097-m02 --format={{.State.Status}}
	I0116 03:45:14.037561  797521 status.go:330] multinode-741097-m02 host status = "Running" (err=<nil>)
	I0116 03:45:14.037590  797521 host.go:66] Checking if "multinode-741097-m02" exists ...
	I0116 03:45:14.037898  797521 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-741097-m02
	I0116 03:45:14.061152  797521 host.go:66] Checking if "multinode-741097-m02" exists ...
	I0116 03:45:14.061458  797521 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0116 03:45:14.061507  797521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-741097-m02
	I0116 03:45:14.079280  797521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33562 SSHKeyPath:/home/jenkins/minikube-integration/17967-719286/.minikube/machines/multinode-741097-m02/id_rsa Username:docker}
	I0116 03:45:14.174235  797521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:45:14.187512  797521 status.go:257] multinode-741097-m02 status: &{Name:multinode-741097-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0116 03:45:14.187547  797521 status.go:255] checking status of multinode-741097-m03 ...
	I0116 03:45:14.187854  797521 cli_runner.go:164] Run: docker container inspect multinode-741097-m03 --format={{.State.Status}}
	I0116 03:45:14.205406  797521 status.go:330] multinode-741097-m03 host status = "Stopped" (err=<nil>)
	I0116 03:45:14.205427  797521 status.go:343] host is not running, skipping remaining checks
	I0116 03:45:14.205436  797521 status.go:257] multinode-741097-m03 status: &{Name:multinode-741097-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
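Note: as the two runs above show, `minikube status` exits 7 (rather than 0) once any host is Stopped, so the non-zero exits here are the expected signal, not failures. Scripts can branch on the code (sketch):

    out/minikube-linux-arm64 -p multinode-741097 status
    case $? in
      0) echo "all nodes up" ;;
      7) echo "at least one node stopped" ;;
      *) echo "unexpected status error" ;;
    esac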
--- PASS: TestMultiNode/serial/StopNode (2.35s)

TestMultiNode/serial/StartAfterStop (11.93s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-741097 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-741097 node start m03 --alsologtostderr: (11.114861502s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p multinode-741097 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (11.93s)

TestMultiNode/serial/RestartKeepsNodes (122.23s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-741097
multinode_test.go:318: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-741097
E0116 03:45:38.775981  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/functional-983329/client.crt: no such file or directory
multinode_test.go:318: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-741097: (24.94543517s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-741097 --wait=true -v=8 --alsologtostderr
multinode_test.go:323: (dbg) Done: out/minikube-linux-arm64 start -p multinode-741097 --wait=true -v=8 --alsologtostderr: (1m37.128484458s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-741097
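Note: the test captures `node list` before the stop and compares it with the list after the restart; a full stop/start cycle with --wait=true must bring back all three nodes. Sketch of that comparison:

    before=$(out/minikube-linux-arm64 node list -p multinode-741097)
    out/minikube-linux-arm64 stop -p multinode-741097
    out/minikube-linux-arm64 start -p multinode-741097 --wait=true
    after=$(out/minikube-linux-arm64 node list -p multinode-741097)
    [ "$before" = "$after" ] && echo "all nodes restored"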
--- PASS: TestMultiNode/serial/RestartKeepsNodes (122.23s)

TestMultiNode/serial/DeleteNode (5.03s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-741097 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p multinode-741097 node delete m03: (4.287974165s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p multinode-741097 status --alsologtostderr
multinode_test.go:442: (dbg) Run:  docker volume ls
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.03s)

TestMultiNode/serial/StopMultiNode (23.95s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-arm64 -p multinode-741097 stop
multinode_test.go:342: (dbg) Done: out/minikube-linux-arm64 -p multinode-741097 stop: (23.73747674s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-arm64 -p multinode-741097 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-741097 status: exit status 7 (112.554283ms)
-- stdout --
	multinode-741097
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-741097-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p multinode-741097 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-741097 status --alsologtostderr: exit status 7 (103.474986ms)
-- stdout --
	multinode-741097
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-741097-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0116 03:47:57.313494  805655 out.go:296] Setting OutFile to fd 1 ...
	I0116 03:47:57.313679  805655 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:47:57.313691  805655 out.go:309] Setting ErrFile to fd 2...
	I0116 03:47:57.313697  805655 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:47:57.313999  805655 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-719286/.minikube/bin
	I0116 03:47:57.314224  805655 out.go:303] Setting JSON to false
	I0116 03:47:57.314340  805655 mustload.go:65] Loading cluster: multinode-741097
	I0116 03:47:57.314428  805655 notify.go:220] Checking for updates...
	I0116 03:47:57.314820  805655 config.go:182] Loaded profile config "multinode-741097": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 03:47:57.314843  805655 status.go:255] checking status of multinode-741097 ...
	I0116 03:47:57.315379  805655 cli_runner.go:164] Run: docker container inspect multinode-741097 --format={{.State.Status}}
	I0116 03:47:57.334339  805655 status.go:330] multinode-741097 host status = "Stopped" (err=<nil>)
	I0116 03:47:57.334374  805655 status.go:343] host is not running, skipping remaining checks
	I0116 03:47:57.334382  805655 status.go:257] multinode-741097 status: &{Name:multinode-741097 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0116 03:47:57.334415  805655 status.go:255] checking status of multinode-741097-m02 ...
	I0116 03:47:57.334711  805655 cli_runner.go:164] Run: docker container inspect multinode-741097-m02 --format={{.State.Status}}
	I0116 03:47:57.351510  805655 status.go:330] multinode-741097-m02 host status = "Stopped" (err=<nil>)
	I0116 03:47:57.351532  805655 status.go:343] host is not running, skipping remaining checks
	I0116 03:47:57.351539  805655 status.go:257] multinode-741097-m02 status: &{Name:multinode-741097-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.95s)

TestMultiNode/serial/RestartMultiNode (83.44s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-741097 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0116 03:48:18.771673  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/client.crt: no such file or directory
E0116 03:48:23.275039  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-linux-arm64 start -p multinode-741097 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m22.688631356s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p multinode-741097 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (83.44s)

TestMultiNode/serial/ValidateNameConflict (37s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-741097
multinode_test.go:480: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-741097-m02 --driver=docker  --container-runtime=crio
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-741097-m02 --driver=docker  --container-runtime=crio: exit status 14 (169.80769ms)
-- stdout --
	* [multinode-741097-m02] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17967
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17967-719286/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-719286/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-741097-m02' is duplicated with machine name 'multinode-741097-m02' in profile 'multinode-741097'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-741097-m03 --driver=docker  --container-runtime=crio
multinode_test.go:488: (dbg) Done: out/minikube-linux-arm64 start -p multinode-741097-m03 --driver=docker  --container-runtime=crio: (34.374731603s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-741097
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-741097: exit status 80 (364.360472ms)
-- stdout --
	* Adding node m03 to cluster multinode-741097
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-741097-m03 already exists in multinode-741097-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-741097-m03
multinode_test.go:500: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-741097-m03: (2.003404117s)
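Note: this subtest checks two guards. Starting a profile whose name collides with an existing machine name fails fast with exit code 14 (MK_USAGE), and `node add` refuses when the next node name is already taken by a standalone profile, exiting 80 (GUEST_NODE_ADD). The first guard, scripted (sketch):

    out/minikube-linux-arm64 start -p multinode-741097-m02 --driver=docker --container-runtime=crio
    [ $? -eq 14 ] && echo "duplicate profile name rejected as expected"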
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.00s)

TestPreload (164.46s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-170245 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0116 03:50:38.776088  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/functional-983329/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-170245 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m22.693301911s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-170245 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-170245 image pull gcr.io/k8s-minikube/busybox: (2.257620315s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-170245
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-170245: (5.780373031s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-170245 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0116 03:52:01.820104  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/functional-983329/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-170245 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m11.115193952s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-170245 image list
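Note: the point of TestPreload is that an image pulled while running Kubernetes v1.24.4 with --preload=false (gcr.io/k8s-minikube/busybox here) must still be present after the stop and the preload-enabled restart; `image list` is the final assertion. Manual spot check (sketch):

    out/minikube-linux-arm64 -p test-preload-170245 image list | grep busybox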
helpers_test.go:175: Cleaning up "test-preload-170245" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-170245
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-170245: (2.351996069s)
--- PASS: TestPreload (164.46s)

TestScheduledStopUnix (106.94s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-609186 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-609186 --memory=2048 --driver=docker  --container-runtime=crio: (29.80450105s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-609186 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-609186 -n scheduled-stop-609186
E0116 03:53:18.771343  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-609186 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-609186 --cancel-scheduled
E0116 03:53:23.275206  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-609186 -n scheduled-stop-609186
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-609186
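Note: the sequence so far arms a delayed stop, re-arms it with a shorter delay, cancels it, and confirms the host is still Running; only the final un-cancelled 15s schedule (below) actually stops the node. The user-facing flow (values as in this run):

    out/minikube-linux-arm64 stop -p scheduled-stop-609186 --schedule 5m        # arm a stop
    out/minikube-linux-arm64 stop -p scheduled-stop-609186 --cancel-scheduled   # disarm it
    out/minikube-linux-arm64 status --format '{{.Host}}' -p scheduled-stop-609186  # still Running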
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-609186 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-609186
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-609186: exit status 7 (89.88545ms)
-- stdout --
	scheduled-stop-609186
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-609186 -n scheduled-stop-609186
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-609186 -n scheduled-stop-609186: exit status 7 (95.136485ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-609186" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-609186
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-609186: (5.381384478s)
--- PASS: TestScheduledStopUnix (106.94s)

TestInsufficientStorage (13.53s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-208289 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
E0116 03:54:46.316260  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/client.crt: no such file or directory
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-208289 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.983990097s)
-- stdout --
	{"specversion":"1.0","id":"cf1e573b-a430-4259-9c40-445ef3dc3171","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-208289] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6c9f7afb-6a50-4ae1-812c-2bdffaa106a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17967"}}
	{"specversion":"1.0","id":"5b9d76bf-5b14-4674-9c02-b48296195868","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"04af1892-592c-40e5-91cf-e0fd5d6e6632","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17967-719286/kubeconfig"}}
	{"specversion":"1.0","id":"b7ee0ed5-ba78-4fc4-b270-6de3c10c79af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-719286/.minikube"}}
	{"specversion":"1.0","id":"fe2106c2-9671-4bcc-9af4-e2936f42a20b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"cb4db94b-45f0-4d8a-be66-72d36695a917","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0b256667-271e-4ae5-95d8-2bf1f02d5553","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"8db0d91f-ac40-4ccc-b186-e8beb8ff4700","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"5f099561-0713-43d7-8853-8810cd4a885a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"6e59730d-591c-4ad8-8f5c-313dae94507c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"b01ce347-bbb3-4521-98d7-7f763660c2ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-208289 in cluster insufficient-storage-208289","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"71e3f60c-7671-4f8b-8d7e-fcfc8d49b61e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1704759386-17866 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"08cc817a-132f-4800-a184-8dcb4f9a19cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"45060f41-994c-4f23-9c86-17420fee6286","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-208289 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-208289 --output=json --layout=cluster: exit status 7 (311.687456ms)
-- stdout --
	{"Name":"insufficient-storage-208289","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-208289","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0116 03:54:46.776223  822129 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-208289" does not appear in /home/jenkins/minikube-integration/17967-719286/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-208289 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-208289 --output=json --layout=cluster: exit status 7 (309.383964ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-208289","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-208289","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0116 03:54:47.086040  822182 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-208289" does not appear in /home/jenkins/minikube-integration/17967-719286/kubeconfig
	E0116 03:54:47.097778  822182 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/insufficient-storage-208289/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-208289" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-208289
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-208289: (1.919926686s)
--- PASS: TestInsufficientStorage (13.53s)
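
For scripting against the status output this test exercises, the --layout=cluster JSON decodes with a handful of struct fields. A minimal Go sketch, reusing the profile name from this run; the field names are copied from the JSON above, and the error from the command is deliberately ignored because minikube exits non-zero for degraded clusters:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// clusterStatus declares only the fields used below; the shape and
	// field names are copied from the --layout=cluster JSON above.
	type clusterStatus struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
		Nodes      []struct {
			Name       string `json:"Name"`
			StatusCode int    `json:"StatusCode"`
			StatusName string `json:"StatusName"`
		} `json:"Nodes"`
	}

	func main() {
		// minikube exits non-zero (7 above) for a degraded cluster, so the
		// exit error is discarded and whatever JSON reached stdout is parsed.
		out, _ := exec.Command("minikube", "status", "-p", "insufficient-storage-208289",
			"--output=json", "--layout=cluster").Output()
		var st clusterStatus
		if err := json.Unmarshal(out, &st); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s: %d %s\n", st.Name, st.StatusCode, st.StatusName) // 507 InsufficientStorage above
		for _, n := range st.Nodes {
			fmt.Printf("  node %s: %d %s\n", n.Name, n.StatusCode, n.StatusName)
		}
	}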

                                                
                                    
x
+
TestRunningBinaryUpgrade (107.8s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2477639711 start -p running-upgrade-032274 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2477639711 start -p running-upgrade-032274 --memory=2200 --vm-driver=docker  --container-runtime=crio: (36.014875982s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-032274 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-032274 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m7.196034678s)
helpers_test.go:175: Cleaning up "running-upgrade-032274" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-032274
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-032274: (2.9385466s)
--- PASS: TestRunningBinaryUpgrade (107.80s)

                                                
                                    
x
+
TestKubernetesUpgrade (408.3s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-713383 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-713383 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m3.012962309s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-713383
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-713383: (3.556788325s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-713383 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-713383 status --format={{.Host}}: exit status 7 (169.670489ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-713383 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-713383 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m46.897593949s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-713383 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-713383 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-713383 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (121.753386ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-713383] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17967
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17967-719286/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-719286/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-713383
	    minikube start -p kubernetes-upgrade-713383 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7133832 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-713383 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-713383 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-713383 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (51.989815108s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-713383" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-713383
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-713383: (2.422718035s)
--- PASS: TestKubernetesUpgrade (408.30s)
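
The downgrade refusal above (exit status 106) amounts to a semantic-version comparison between the running and requested Kubernetes versions. An illustrative Go sketch of such a guard, using golang.org/x/mod/semver rather than minikube's actual implementation:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/mod/semver"
	)

	func main() {
		current, requested := "v1.29.0-rc.2", "v1.16.0" // versions from the run above
		// semver.Compare returns a negative value when requested < current,
		// i.e. an unsupported downgrade.
		if semver.Compare(requested, current) < 0 {
			fmt.Fprintf(os.Stderr,
				"cannot safely downgrade existing Kubernetes %s cluster to %s\n",
				current, requested)
			os.Exit(106) // the exit status observed above
		}
		fmt.Println("version change accepted")
	}

Note that v1.29.0-rc.2 still compares higher than v1.16.0 despite the pre-release suffix, which is why the downgrade path is taken.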

                                                
                                    
x
+
TestMissingContainerUpgrade (149.97s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1295811988 start -p missing-upgrade-121414 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1295811988 start -p missing-upgrade-121414 --memory=2200 --driver=docker  --container-runtime=crio: (1m12.330713338s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-121414
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-121414: (10.408338599s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-121414
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-121414 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-121414 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m3.90053348s)
helpers_test.go:175: Cleaning up "missing-upgrade-121414" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-121414
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-121414: (2.1560797s)
--- PASS: TestMissingContainerUpgrade (149.97s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-575768 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-575768 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (99.976334ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-575768] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17967
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17967-719286/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-719286/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
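
The MK_USAGE failure above (exit status 14) is a mutual-exclusion check between --no-kubernetes and --kubernetes-version. A hedged sketch of the same guard using Go's standard flag package (not minikube's actual flag handling):

	package main

	import (
		"flag"
		"fmt"
		"os"
	)

	func main() {
		noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
		version := flag.String("kubernetes-version", "", "Kubernetes version to run")
		flag.Parse()

		// The two flags contradict each other, as the MK_USAGE message notes.
		if *noK8s && *version != "" {
			fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
			os.Exit(14) // usage-error exit status, matching the output above
		}
		fmt.Println("flags accepted")
	}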

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (43.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-575768 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-575768 --driver=docker  --container-runtime=crio: (42.832474031s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-575768 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (43.22s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (28.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-575768 --no-kubernetes --driver=docker  --container-runtime=crio
E0116 03:55:38.778848  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/functional-983329/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-575768 --no-kubernetes --driver=docker  --container-runtime=crio: (25.630492433s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-575768 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-575768 status -o json: exit status 2 (407.659794ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-575768","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-575768
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-575768: (2.302987561s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (28.34s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (6.72s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-575768 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-575768 --no-kubernetes --driver=docker  --container-runtime=crio: (6.715051731s)
--- PASS: TestNoKubernetes/serial/Start (6.72s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-575768 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-575768 "sudo systemctl is-active --quiet service kubelet": exit status 1 (294.706959ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)
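
The verification above leans on systemctl's exit-status contract: `systemctl is-active` exits 0 for an active unit and non-zero (status 3, matching the ssh error above) for an inactive one. A small illustrative Go sketch of the same check:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
		var ee *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("kubelet is active")
		case errors.As(err, &ee):
			fmt.Printf("kubelet not running (exit %d)\n", ee.ExitCode()) // 3 in the run above
		default:
			fmt.Println("systemctl could not be run:", err)
		}
	}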

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (4.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-arm64 profile list: (2.115681498s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-arm64 profile list --output=json: (2.267890223s)
--- PASS: TestNoKubernetes/serial/ProfileList (4.38s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-575768
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-575768: (1.232408054s)
--- PASS: TestNoKubernetes/serial/Stop (1.23s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (6.87s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-575768 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-575768 --driver=docker  --container-runtime=crio: (6.867617284s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.87s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-575768 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-575768 "sudo systemctl is-active --quiet service kubelet": exit status 1 (312.733986ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (1.39s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.39s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (70.8s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1762616728 start -p stopped-upgrade-986584 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1762616728 start -p stopped-upgrade-986584 --memory=2200 --vm-driver=docker  --container-runtime=crio: (40.347655627s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1762616728 -p stopped-upgrade-986584 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1762616728 -p stopped-upgrade-986584 stop: (2.548664656s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-986584 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0116 03:58:18.771500  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/client.crt: no such file or directory
E0116 03:58:23.274475  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-986584 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (27.901408483s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (70.80s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.08s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-986584
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-986584: (1.07501529s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.08s)

                                                
                                    
x
+
TestPause/serial/Start (77.27s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-724375 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E0116 04:00:38.776035  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/functional-983329/client.crt: no such file or directory
E0116 04:01:21.822760  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-724375 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m17.267481235s)
--- PASS: TestPause/serial/Start (77.27s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (35.05s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-724375 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-724375 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (35.008468816s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (35.05s)

                                                
                                    
x
+
TestPause/serial/Pause (1.27s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-724375 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-724375 --alsologtostderr -v=5: (1.274283375s)
--- PASS: TestPause/serial/Pause (1.27s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.53s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-724375 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-724375 --output=json --layout=cluster: exit status 2 (530.032652ms)

                                                
                                                
-- stdout --
	{"Name":"pause-724375","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-724375","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.53s)
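
The 418 StatusCode above marks a paused cluster; the status JSON throughout this report reuses HTTP-flavored codes (418 being HTTP's "I'm a teapot"). A quick reference map built only from the codes observed in this report:

	package main

	import "fmt"

	// Codes and names exactly as they appear in this report's status JSON.
	var statusNames = map[int]string{
		200: "OK",
		405: "Stopped",
		418: "Paused",
		500: "Error",
		507: "InsufficientStorage",
	}

	func main() {
		for _, c := range []int{418, 405, 200} { // codes from the JSON above
			fmt.Printf("%d => %s\n", c, statusNames[c])
		}
	}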

                                                
                                    
x
+
TestPause/serial/Unpause (1.12s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-724375 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-linux-arm64 unpause -p pause-724375 --alsologtostderr -v=5: (1.120799999s)
--- PASS: TestPause/serial/Unpause (1.12s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (1.44s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-724375 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-724375 --alsologtostderr -v=5: (1.43490645s)
--- PASS: TestPause/serial/PauseAgain (1.44s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (3.41s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-724375 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-724375 --alsologtostderr -v=5: (3.410227976s)
--- PASS: TestPause/serial/DeletePaused (3.41s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.49s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-724375
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-724375: exit status 1 (17.35658ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-724375: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.49s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (6.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-081128 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-081128 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (342.907649ms)

                                                
                                                
-- stdout --
	* [false-081128] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17967
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17967-719286/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-719286/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0116 04:03:01.091373  860241 out.go:296] Setting OutFile to fd 1 ...
	I0116 04:03:01.091591  860241 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 04:03:01.091617  860241 out.go:309] Setting ErrFile to fd 2...
	I0116 04:03:01.091637  860241 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 04:03:01.091998  860241 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17967-719286/.minikube/bin
	I0116 04:03:01.092674  860241 out.go:303] Setting JSON to false
	I0116 04:03:01.093613  860241 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":13530,"bootTime":1705364251,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0116 04:03:01.093708  860241 start.go:138] virtualization:  
	I0116 04:03:01.098985  860241 out.go:177] * [false-081128] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0116 04:03:01.101889  860241 notify.go:220] Checking for updates...
	I0116 04:03:01.102630  860241 out.go:177]   - MINIKUBE_LOCATION=17967
	I0116 04:03:01.105250  860241 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 04:03:01.107529  860241 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17967-719286/kubeconfig
	I0116 04:03:01.111368  860241 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17967-719286/.minikube
	I0116 04:03:01.113539  860241 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0116 04:03:01.115677  860241 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 04:03:01.118413  860241 config.go:182] Loaded profile config "kubernetes-upgrade-713383": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0116 04:03:01.118585  860241 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 04:03:01.156196  860241 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0116 04:03:01.156297  860241 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 04:03:01.302663  860241 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:45 SystemTime:2024-01-16 04:03:01.290718492 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0116 04:03:01.302752  860241 docker.go:295] overlay module found
	I0116 04:03:01.305338  860241 out.go:177] * Using the docker driver based on user configuration
	I0116 04:03:01.307476  860241 start.go:298] selected driver: docker
	I0116 04:03:01.307488  860241 start.go:902] validating driver "docker" against <nil>
	I0116 04:03:01.307500  860241 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 04:03:01.310451  860241 out.go:177] 
	W0116 04:03:01.312755  860241 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0116 04:03:01.315709  860241 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-081128 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-081128

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-081128

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-081128

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-081128

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-081128

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-081128

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-081128

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-081128

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-081128

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-081128

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081128"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081128"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081128"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-081128

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081128"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081128"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-081128" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-081128" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-081128" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-081128" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-081128" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-081128" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-081128" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-081128" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081128"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081128"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081128"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081128"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081128"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-081128" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-081128" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-081128" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081128"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081128"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081128"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081128"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081128"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17967-719286/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Jan 2024 04:02:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: kubernetes-upgrade-713383
contexts:
- context:
    cluster: kubernetes-upgrade-713383
    extensions:
    - extension:
        last-update: Tue, 16 Jan 2024 04:02:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-713383
  name: kubernetes-upgrade-713383
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-713383
  user:
    client-certificate: /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/kubernetes-upgrade-713383/client.crt
    client-key: /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/kubernetes-upgrade-713383/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-081128

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081128"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081128"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081128"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081128"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081128"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081128"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081128"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081128"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081128"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081128"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081128"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081128"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081128"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081128"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081128"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081128"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081128"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-081128"

                                                
                                                
----------------------- debugLogs end: false-081128 [took: 5.806362329s] --------------------------------
helpers_test.go:175: Cleaning up "false-081128" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-081128
--- PASS: TestNetworkPlugins/group/false (6.37s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (119.86s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-683759 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E0116 04:05:38.776126  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/functional-983329/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-683759 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (1m59.860882632s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (119.86s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (9.49s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-683759 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b25b2e35-df98-4d26-a753-66285bc7d64b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b25b2e35-df98-4d26-a753-66285bc7d64b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.002641401s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-683759 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.49s)
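The "waiting 8m0s for pods matching ..." step above is a poll over the label selector until every matching pod reports Running. A rough Go equivalent using kubectl's jsonpath output; waitForRunning is a hypothetical helper, not part of helpers_test.go, and polling (rather than a watch) mirrors the simple retry loop style of the harness.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitForRunning polls kubectl until every pod matching the selector
    // reports phase Running, or the deadline passes.
    func waitForRunning(kubeContext, selector string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "--context", kubeContext,
                "get", "pods", "-l", selector,
                "-o", "jsonpath={.items[*].status.phase}").Output()
            if err == nil {
                phases := strings.Fields(string(out))
                running := len(phases) > 0
                for _, p := range phases {
                    if p != "Running" {
                        running = false
                    }
                }
                if running {
                    return nil
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("pods %q not Running within %s", selector, timeout)
    }

    func main() {
        if err := waitForRunning("old-k8s-version-683759", "integration-test=busybox", 8*time.Minute); err != nil {
            fmt.Println(err)
        }
    }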

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-683759 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-683759 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.10s)

TestStartStop/group/old-k8s-version/serial/Stop (12.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-683759 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-683759 --alsologtostderr -v=3: (12.097660135s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.10s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-683759 -n old-k8s-version-683759
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-683759 -n old-k8s-version-683759: exit status 7 (84.98316ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-683759 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)
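Note the harness's "exit status 7 (may be ok)": `minikube status` exits non-zero when the host is down, and this step deliberately tolerates that because the profile was just stopped. A small Go sketch of the same tolerance; treating only code 7 as benign is this sketch's assumption, the binary path and profile are from the log.

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        // Query just the host state, as the test does with --format={{.Host}}.
        out, err := exec.Command("out/minikube-linux-arm64", "status",
            "--format={{.Host}}", "-p", "old-k8s-version-683759").Output()
        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == 7 {
            // Non-zero, but expected right after a stop: the profile exists
            // and the host is simply down, so the addon can still be enabled.
            fmt.Printf("host state: %s (stopped, proceeding)\n", out)
            return
        }
        if err != nil {
            fmt.Println("status failed:", err)
            return
        }
        fmt.Printf("host state: %s\n", out)
    }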

TestStartStop/group/old-k8s-version/serial/SecondStart (458.47s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-683759 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-683759 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (7m37.968912523s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-683759 -n old-k8s-version-683759
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (458.47s)

TestStartStop/group/no-preload/serial/FirstStart (64.57s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-152607 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0116 04:08:18.771926  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-152607 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (1m4.566151823s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (64.57s)

TestStartStop/group/no-preload/serial/DeployApp (9.35s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-152607 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8e022f6b-348c-446b-82b5-37d74c0fe775] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0116 04:08:23.275338  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/client.crt: no such file or directory
helpers_test.go:344: "busybox" [8e022f6b-348c-446b-82b5-37d74c0fe775] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003622005s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-152607 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.35s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-152607 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-152607 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.022188503s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-152607 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.13s)
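The --images/--registries flags above point the metrics-server deployment at a fake registry, which is why the step follows up with a describe rather than waiting for readiness. One way to confirm the override landed, sketched in Go; the jsonpath and the fake.domain/ prefix check are illustrative, not the harness's exact assertion.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Read the image the metrics-server deployment actually ended up with.
        out, err := exec.Command("kubectl", "--context", "no-preload-152607",
            "get", "deploy", "metrics-server", "-n", "kube-system",
            "-o", "jsonpath={.spec.template.spec.containers[0].image}").Output()
        if err != nil {
            fmt.Println("lookup failed:", err)
            return
        }
        img := strings.TrimSpace(string(out))
        if strings.HasPrefix(img, "fake.domain/") {
            fmt.Println("registry override applied:", img)
        } else {
            fmt.Println("unexpected image:", img)
        }
    }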

TestStartStop/group/no-preload/serial/Stop (11.99s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-152607 --alsologtostderr -v=3
E0116 04:08:41.820717  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/functional-983329/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-152607 --alsologtostderr -v=3: (11.989697517s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.99s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-152607 -n no-preload-152607
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-152607 -n no-preload-152607: exit status 7 (83.482735ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-152607 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/no-preload/serial/SecondStart (346.54s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-152607 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0116 04:10:38.776676  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/functional-983329/client.crt: no such file or directory
E0116 04:11:26.316765  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/client.crt: no such file or directory
E0116 04:13:18.771952  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/client.crt: no such file or directory
E0116 04:13:23.274412  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-152607 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (5m45.998844938s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-152607 -n no-preload-152607
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (346.54s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-nv5gp" [7b3b750f-acc9-47f9-a869-70764839b3fc] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-nv5gp" [7b3b750f-acc9-47f9-a869-70764839b3fc] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.00336629s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.00s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-md5mx" [d1a4580f-1049-4c28-a326-ddbb57f216f0] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00316688s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-md5mx" [d1a4580f-1049-4c28-a326-ddbb57f216f0] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003677465s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-683759 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.11s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-nv5gp" [7b3b750f-acc9-47f9-a869-70764839b3fc] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003873597s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-152607 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-683759 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220726-ed811e41
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)
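The image check parses `image list --format=json` and reports anything outside the expected set for the Kubernetes version under test; the kindnetd and busybox entries above are expected leftovers from earlier steps, hence informational rather than failures. A Go sketch of that comparison; both the JSON field names and the allowlist contents here are assumptions, not the harness's actual table.

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // image models just the fields this check needs; the field names are an
    // assumption about `image list --format=json` output, not verified here.
    type image struct {
        ID       string   `json:"id"`
        RepoTags []string `json:"repoTags"`
    }

    func main() {
        out, err := exec.Command("out/minikube-linux-arm64",
            "-p", "old-k8s-version-683759", "image", "list", "--format=json").Output()
        if err != nil {
            fmt.Println("image list failed:", err)
            return
        }
        var imgs []image
        if err := json.Unmarshal(out, &imgs); err != nil {
            fmt.Println("unexpected JSON:", err)
            return
        }
        // Illustrative allowlist only; the real test derives the expected set
        // from the Kubernetes version under test.
        expected := map[string]bool{
            "registry.k8s.io/pause:3.1": true,
        }
        for _, img := range imgs {
            for _, tag := range img.RepoTags {
                if !expected[tag] {
                    fmt.Println("Found non-minikube image:", tag)
                }
            }
        }
    }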

TestStartStop/group/old-k8s-version/serial/Pause (3.97s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-683759 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-683759 -n old-k8s-version-683759
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-683759 -n old-k8s-version-683759: exit status 2 (356.657814ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-683759 -n old-k8s-version-683759
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-683759 -n old-k8s-version-683759: exit status 2 (354.389719ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-683759 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-683759 -n old-k8s-version-683759
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-683759 -n old-k8s-version-683759
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.97s)
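The exit status 2 results in the Pause subtest are expected: while the profile is paused, `status --format={{.APIServer}}` prints Paused and `--format={{.Kubelet}}` prints Stopped, each with a non-zero exit that the harness marks "may be ok". A compact Go sketch of the pause/inspect/unpause round trip; statusField is a hypothetical helper, and treating only exit code 2 as benign is this sketch's assumption.

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
        "strings"
    )

    // statusField reads one field of `minikube status`, tolerating exit
    // status 2, which is what a paused component reports.
    func statusField(profile, tmpl string) string {
        out, err := exec.Command("out/minikube-linux-arm64", "status",
            "--format="+tmpl, "-p", profile).Output()
        var ee *exec.ExitError
        if err != nil && !(errors.As(err, &ee) && ee.ExitCode() == 2) {
            fmt.Println("status failed:", err)
        }
        return strings.TrimSpace(string(out))
    }

    func main() {
        const profile = "old-k8s-version-683759"
        if err := exec.Command("out/minikube-linux-arm64", "pause", "-p", profile).Run(); err != nil {
            fmt.Println("pause failed:", err)
            return
        }
        // While paused: API server reports Paused, kubelet reports Stopped.
        fmt.Println("apiserver:", statusField(profile, "{{.APIServer}}"))
        fmt.Println("kubelet:", statusField(profile, "{{.Kubelet}}"))
        if err := exec.Command("out/minikube-linux-arm64", "unpause", "-p", profile).Run(); err != nil {
            fmt.Println("unpause failed:", err)
        }
    }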

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-152607 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/no-preload/serial/Pause (4.92s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-152607 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-152607 --alsologtostderr -v=1: (1.464968652s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-152607 -n no-preload-152607
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-152607 -n no-preload-152607: exit status 2 (445.059693ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-152607 -n no-preload-152607
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-152607 -n no-preload-152607: exit status 2 (502.559419ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-152607 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p no-preload-152607 --alsologtostderr -v=1: (1.138146441s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-152607 -n no-preload-152607
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-152607 -n no-preload-152607
--- PASS: TestStartStop/group/no-preload/serial/Pause (4.92s)

TestStartStop/group/embed-certs/serial/FirstStart (84.91s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-710964 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-710964 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (1m24.907077233s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (84.91s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84.82s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-764011 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E0116 04:15:38.776195  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/functional-983329/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-764011 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (1m24.816324474s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84.82s)

TestStartStop/group/embed-certs/serial/DeployApp (9.36s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-710964 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [47305166-5243-45ea-838c-da16bb7f3f25] Pending
helpers_test.go:344: "busybox" [47305166-5243-45ea-838c-da16bb7f3f25] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [47305166-5243-45ea-838c-da16bb7f3f25] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003146023s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-710964 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.36s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.33s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-764011 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1ce06659-ec20-44fc-989a-2f8548a4a930] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1ce06659-ec20-44fc-989a-2f8548a4a930] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003419518s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-764011 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.33s)
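Each DeployApp step closes by exec-ing `ulimit -n` inside the busybox pod, confirming the open-file limit inside the CRI-O container once the workload is up. The same probe as a standalone Go sketch, with the context name taken from this run:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Run `ulimit -n` inside the busybox pod, as the test's final step does.
        out, err := exec.Command("kubectl", "--context", "default-k8s-diff-port-764011",
            "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").CombinedOutput()
        if err != nil {
            fmt.Printf("exec failed: %v\n%s\n", err, out)
            return
        }
        fmt.Printf("open-file limit in pod: %s", out)
    }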

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-710964 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-710964 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.067477213s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-710964 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.20s)

TestStartStop/group/embed-certs/serial/Stop (12s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-710964 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-710964 --alsologtostderr -v=3: (12.004839099s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.00s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-764011 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0116 04:16:32.092409  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/old-k8s-version-683759/client.crt: no such file or directory
E0116 04:16:32.098044  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/old-k8s-version-683759/client.crt: no such file or directory
E0116 04:16:32.108260  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/old-k8s-version-683759/client.crt: no such file or directory
E0116 04:16:32.128480  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/old-k8s-version-683759/client.crt: no such file or directory
E0116 04:16:32.168710  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/old-k8s-version-683759/client.crt: no such file or directory
E0116 04:16:32.248944  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/old-k8s-version-683759/client.crt: no such file or directory
E0116 04:16:32.409263  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/old-k8s-version-683759/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-764011 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.056002917s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-764011 describe deploy/metrics-server -n kube-system
E0116 04:16:32.729750  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/old-k8s-version-683759/client.crt: no such file or directory
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.17s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.99s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-764011 --alsologtostderr -v=3
E0116 04:16:33.370106  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/old-k8s-version-683759/client.crt: no such file or directory
E0116 04:16:34.650820  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/old-k8s-version-683759/client.crt: no such file or directory
E0116 04:16:37.211525  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/old-k8s-version-683759/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-764011 --alsologtostderr -v=3: (11.98929537s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.99s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-710964 -n embed-certs-710964
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-710964 -n embed-certs-710964: exit status 7 (90.460914ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-710964 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/embed-certs/serial/SecondStart (626.89s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-710964 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E0116 04:16:42.332098  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/old-k8s-version-683759/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-710964 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (10m26.444758353s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-710964 -n embed-certs-710964
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (626.89s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-764011 -n default-k8s-diff-port-764011
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-764011 -n default-k8s-diff-port-764011: exit status 7 (107.390878ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-764011 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.29s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (355.91s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-764011 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E0116 04:16:52.573182  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/old-k8s-version-683759/client.crt: no such file or directory
E0116 04:17:13.054159  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/old-k8s-version-683759/client.crt: no such file or directory
E0116 04:17:54.014364  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/old-k8s-version-683759/client.crt: no such file or directory
E0116 04:18:01.823802  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/client.crt: no such file or directory
E0116 04:18:18.771942  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/client.crt: no such file or directory
E0116 04:18:20.452902  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/no-preload-152607/client.crt: no such file or directory
E0116 04:18:20.458390  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/no-preload-152607/client.crt: no such file or directory
E0116 04:18:20.468602  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/no-preload-152607/client.crt: no such file or directory
E0116 04:18:20.488839  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/no-preload-152607/client.crt: no such file or directory
E0116 04:18:20.529196  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/no-preload-152607/client.crt: no such file or directory
E0116 04:18:20.609460  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/no-preload-152607/client.crt: no such file or directory
E0116 04:18:20.769807  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/no-preload-152607/client.crt: no such file or directory
E0116 04:18:21.090316  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/no-preload-152607/client.crt: no such file or directory
E0116 04:18:21.731273  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/no-preload-152607/client.crt: no such file or directory
E0116 04:18:23.012283  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/no-preload-152607/client.crt: no such file or directory
E0116 04:18:23.274809  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/client.crt: no such file or directory
E0116 04:18:25.572644  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/no-preload-152607/client.crt: no such file or directory
E0116 04:18:30.692895  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/no-preload-152607/client.crt: no such file or directory
E0116 04:18:40.933748  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/no-preload-152607/client.crt: no such file or directory
E0116 04:19:01.414903  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/no-preload-152607/client.crt: no such file or directory
E0116 04:19:15.934574  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/old-k8s-version-683759/client.crt: no such file or directory
E0116 04:19:42.375912  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/no-preload-152607/client.crt: no such file or directory
E0116 04:20:38.776086  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/functional-983329/client.crt: no such file or directory
E0116 04:21:04.296280  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/no-preload-152607/client.crt: no such file or directory
E0116 04:21:32.092377  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/old-k8s-version-683759/client.crt: no such file or directory
E0116 04:21:59.774989  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/old-k8s-version-683759/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-764011 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (5m55.260769443s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-764011 -n default-k8s-diff-port-764011
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (355.91s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-stdd7" [f1dfc59e-36c7-4a39-b442-114b1f0a3651] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-stdd7" [f1dfc59e-36c7-4a39-b442-114b1f0a3651] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.004333165s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-stdd7" [f1dfc59e-36c7-4a39-b442-114b1f0a3651] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004438644s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-764011 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-764011 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.43s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-764011 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-764011 -n default-k8s-diff-port-764011
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-764011 -n default-k8s-diff-port-764011: exit status 2 (358.673378ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-764011 -n default-k8s-diff-port-764011
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-764011 -n default-k8s-diff-port-764011: exit status 2 (364.054641ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-764011 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-764011 -n default-k8s-diff-port-764011
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-764011 -n default-k8s-diff-port-764011
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.43s)

TestStartStop/group/newest-cni/serial/FirstStart (43.79s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-552460 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0116 04:23:18.771493  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/addons-005301/client.crt: no such file or directory
E0116 04:23:20.453546  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/no-preload-152607/client.crt: no such file or directory
E0116 04:23:23.275345  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-552460 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (43.788400028s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (43.79s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.03s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-552460 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-552460 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.032960772s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.03s)

TestStartStop/group/newest-cni/serial/Stop (1.25s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-552460 --alsologtostderr -v=3
E0116 04:23:48.137211  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/no-preload-152607/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-552460 --alsologtostderr -v=3: (1.248426855s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.25s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-552460 -n newest-cni-552460
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-552460 -n newest-cni-552460: exit status 7 (85.608363ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-552460 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/newest-cni/serial/SecondStart (31.29s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-552460 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-552460 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (30.904361707s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-552460 -n newest-cni-552460
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (31.29s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-552460 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/newest-cni/serial/Pause (3.18s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-552460 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-552460 -n newest-cni-552460
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-552460 -n newest-cni-552460: exit status 2 (353.879403ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-552460 -n newest-cni-552460
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-552460 -n newest-cni-552460: exit status 2 (380.159525ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-552460 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-552460 -n newest-cni-552460
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-552460 -n newest-cni-552460
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.18s)
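
For reference, the Pause check above reduces to the command sequence below. While the profile is paused, "status" exits with code 2 and reports the API server as "Paused" and the kubelet as "Stopped"; the test explicitly tolerates that exit code ("may be ok"). A minimal manual replay against the same profile:

    out/minikube-linux-arm64 pause -p newest-cni-552460 --alsologtostderr -v=1
    out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-552460 -n newest-cni-552460   # "Paused", exit status 2
    out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-552460 -n newest-cni-552460     # "Stopped", exit status 2
    out/minikube-linux-arm64 unpause -p newest-cni-552460 --alsologtostderr -v=1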

                                                
                                    
TestNetworkPlugins/group/auto/Start (77.55s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-081128 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0116 04:25:21.821340  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/functional-983329/client.crt: no such file or directory
E0116 04:25:38.776057  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/functional-983329/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-081128 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m17.552657737s)
--- PASS: TestNetworkPlugins/group/auto/Start (77.55s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-081128 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

TestNetworkPlugins/group/auto/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-081128 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-x6ktw" [88eb1a48-878a-4f4c-a5a7-9e5f76b71de0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-x6ktw" [88eb1a48-878a-4f4c-a5a7-9e5f76b71de0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.00361049s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.26s)
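
The NetCatPod step replaces the netcat deployment from testdata/netcat-deployment.yaml and polls until the pod is Ready. A rough hand-run equivalent, with kubectl wait standing in for the test's own polling helper (an approximation, not the test's code):

    kubectl --context auto-081128 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context auto-081128 wait --for=condition=ready pod -l app=netcat --timeout=15m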

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-081128 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.21s)

TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-081128 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

TestNetworkPlugins/group/auto/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-081128 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)
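
The DNS/Localhost/HairPin trio above probes, in order: cluster DNS from inside the pod, the pod's own loopback, and hairpin traffic (the pod reaching itself back through its own Service name). The three probes as run against this profile; nc's -z flag connects without sending data and -w 5 bounds each attempt at five seconds:

    kubectl --context auto-081128 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-081128 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-081128 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"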

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (79.96s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-081128 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0116 04:26:21.481909  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/default-k8s-diff-port-764011/client.crt: no such file or directory
E0116 04:26:21.487091  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/default-k8s-diff-port-764011/client.crt: no such file or directory
E0116 04:26:21.497382  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/default-k8s-diff-port-764011/client.crt: no such file or directory
E0116 04:26:21.517630  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/default-k8s-diff-port-764011/client.crt: no such file or directory
E0116 04:26:21.557861  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/default-k8s-diff-port-764011/client.crt: no such file or directory
E0116 04:26:21.638192  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/default-k8s-diff-port-764011/client.crt: no such file or directory
E0116 04:26:21.798553  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/default-k8s-diff-port-764011/client.crt: no such file or directory
E0116 04:26:22.119463  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/default-k8s-diff-port-764011/client.crt: no such file or directory
E0116 04:26:22.760420  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/default-k8s-diff-port-764011/client.crt: no such file or directory
E0116 04:26:24.040705  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/default-k8s-diff-port-764011/client.crt: no such file or directory
E0116 04:26:26.601194  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/default-k8s-diff-port-764011/client.crt: no such file or directory
E0116 04:26:31.722255  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/default-k8s-diff-port-764011/client.crt: no such file or directory
E0116 04:26:32.092256  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/old-k8s-version-683759/client.crt: no such file or directory
E0116 04:26:41.962905  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/default-k8s-diff-port-764011/client.crt: no such file or directory
E0116 04:27:02.444018  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/default-k8s-diff-port-764011/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-081128 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m19.96330488s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (79.96s)
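
Each TestNetworkPlugins group in this run starts its own profile with the same base invocation, varying only the CNI selection; the interleaved cert_rotation errors appear to be noise from client certificates of profiles that have since been deleted, not failures of the running test. The per-group flag, collected from the start commands in this report ("<profile>" is a placeholder):

    out/minikube-linux-arm64 start -p <profile> --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker --container-runtime=crio
        (no CNI flag)                      (auto-081128)
        --cni=kindnet                      (kindnet-081128)
        --cni=calico                       (calico-081128)
        --cni=testdata/kube-flannel.yaml   (custom-flannel-081128)
        --enable-default-cni=true          (enable-default-cni-081128)
        --cni=flannel                      (flannel-081128)
        --cni=bridge                       (bridge-081128)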

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-4qxdw" [6b59c5d6-52b0-4907-b66f-b8fbbcfa6463] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003278061s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-4qxdw" [6b59c5d6-52b0-4907-b66f-b8fbbcfa6463] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004294333s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-710964 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-710964 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.37s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-710964 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-710964 -n embed-certs-710964
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-710964 -n embed-certs-710964: exit status 2 (365.481808ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-710964 -n embed-certs-710964
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-710964 -n embed-certs-710964: exit status 2 (368.136911ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-710964 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-710964 -n embed-certs-710964
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-710964 -n embed-certs-710964
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.37s)
E0116 04:32:36.383503  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/kindnet-081128/client.crt: no such file or directory
E0116 04:32:36.388981  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/kindnet-081128/client.crt: no such file or directory
E0116 04:32:36.399198  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/kindnet-081128/client.crt: no such file or directory
E0116 04:32:36.419449  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/kindnet-081128/client.crt: no such file or directory
E0116 04:32:36.459717  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/kindnet-081128/client.crt: no such file or directory
E0116 04:32:36.540143  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/kindnet-081128/client.crt: no such file or directory
E0116 04:32:36.700553  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/kindnet-081128/client.crt: no such file or directory
E0116 04:32:37.020835  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/kindnet-081128/client.crt: no such file or directory
E0116 04:32:37.661274  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/kindnet-081128/client.crt: no such file or directory
E0116 04:32:38.941466  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/kindnet-081128/client.crt: no such file or directory
E0116 04:32:41.502188  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/kindnet-081128/client.crt: no such file or directory
E0116 04:32:46.623035  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/kindnet-081128/client.crt: no such file or directory
E0116 04:32:55.135298  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/old-k8s-version-683759/client.crt: no such file or directory
E0116 04:32:56.863804  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/kindnet-081128/client.crt: no such file or directory

                                                
                                    
TestNetworkPlugins/group/calico/Start (79.59s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-081128 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-081128 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m19.592024071s)
--- PASS: TestNetworkPlugins/group/calico/Start (79.59s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-lrjj7" [322637e0-fc2a-4a46-a933-d1590b7e41d0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005118584s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
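
ControllerPod waits for the CNI's own daemon pod to come up in kube-system. A rough hand-run equivalent, with kubectl wait again standing in for the test's poll loop (an approximation, not the test's code):

    kubectl --context kindnet-081128 -n kube-system wait --for=condition=ready pod -l app=kindnet --timeout=10m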

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-081128 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.43s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.34s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-081128 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-v7x8j" [105a8d51-f058-483d-b375-0ea106b99e79] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0116 04:27:43.404574  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/default-k8s-diff-port-764011/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-v7x8j" [105a8d51-f058-483d-b375-0ea106b99e79] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.004293491s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.34s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-081128 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

TestNetworkPlugins/group/kindnet/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-081128 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.21s)

TestNetworkPlugins/group/kindnet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-081128 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (74.71s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-081128 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E0116 04:28:23.276207  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/ingress-addon-legacy-194312/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-081128 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m14.714692073s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (74.71s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-ds5hv" [f62ed2fe-b093-45b6-b71c-fd07f8a72eeb] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00556882s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.5s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-081128 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.50s)

TestNetworkPlugins/group/calico/NetCatPod (13.34s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-081128 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-h2hq7" [eb033ddb-86cb-4947-9dd7-bb22f2130375] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-h2hq7" [eb033ddb-86cb-4947-9dd7-bb22f2130375] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.006147512s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.34s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-081128 exec deployment/netcat -- nslookup kubernetes.default
E0116 04:29:05.324702  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/default-k8s-diff-port-764011/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

TestNetworkPlugins/group/calico/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-081128 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.19s)

TestNetworkPlugins/group/calico/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-081128 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (91.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-081128 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-081128 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m31.162575661s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (91.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-081128 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.49s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-081128 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-kdnf2" [999d886c-3308-490d-be0a-7d91f0d8d8e6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-kdnf2" [999d886c-3308-490d-be0a-7d91f0d8d8e6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004254917s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.49s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-081128 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-081128 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-081128 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (68.66s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-081128 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0116 04:30:38.776051  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/functional-983329/client.crt: no such file or directory
E0116 04:30:44.352888  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/auto-081128/client.crt: no such file or directory
E0116 04:30:44.358117  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/auto-081128/client.crt: no such file or directory
E0116 04:30:44.368334  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/auto-081128/client.crt: no such file or directory
E0116 04:30:44.388583  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/auto-081128/client.crt: no such file or directory
E0116 04:30:44.428878  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/auto-081128/client.crt: no such file or directory
E0116 04:30:44.509198  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/auto-081128/client.crt: no such file or directory
E0116 04:30:44.669547  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/auto-081128/client.crt: no such file or directory
E0116 04:30:44.990542  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/auto-081128/client.crt: no such file or directory
E0116 04:30:45.631649  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/auto-081128/client.crt: no such file or directory
E0116 04:30:46.912589  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/auto-081128/client.crt: no such file or directory
E0116 04:30:49.473701  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/auto-081128/client.crt: no such file or directory
E0116 04:30:54.594232  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/auto-081128/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-081128 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m8.65866636s)
--- PASS: TestNetworkPlugins/group/flannel/Start (68.66s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-081128 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.25s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-081128 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-rsl8l" [5a7fea92-e16e-4eea-ba9f-7e947f58f922] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0116 04:31:04.834799  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/auto-081128/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-rsl8l" [5a7fea92-e16e-4eea-ba9f-7e947f58f922] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003939267s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.25s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-081128 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-081128 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-081128 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-pr4k4" [b5caf5bb-9e07-40e9-b1bc-02f6eda192e2] Running
E0116 04:31:25.315684  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/auto-081128/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.007876405s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.42s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-081128 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.42s)

TestNetworkPlugins/group/flannel/NetCatPod (12.32s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-081128 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-fg4q5" [fd7ae9a5-8b12-4ff5-84e3-afbbea3f491c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0116 04:31:32.097510  724621 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/old-k8s-version-683759/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-fg4q5" [fd7ae9a5-8b12-4ff5-84e3-afbbea3f491c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.003408393s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.32s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (87.1s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-081128 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-081128 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m27.100976612s)
--- PASS: TestNetworkPlugins/group/bridge/Start (87.10s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-081128 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.27s)

TestNetworkPlugins/group/flannel/Localhost (0.29s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-081128 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.29s)

TestNetworkPlugins/group/flannel/HairPin (0.31s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-081128 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.31s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-081128 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.34s)

TestNetworkPlugins/group/bridge/NetCatPod (10.25s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-081128 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-zzltt" [d2728404-d3c6-4a56-8edd-66b07443ecc9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-zzltt" [d2728404-d3c6-4a56-8edd-66b07443ecc9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003410612s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.25s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-081128 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-081128 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-081128 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                    

Test skip (32/320)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestDownloadOnlyKic (0.63s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-116937 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-116937" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-116937
--- SKIP: TestDownloadOnlyKic (0.63s)

                                                
                                    
TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:444: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1786: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.27s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-965261" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-965261
--- SKIP: TestStartStop/group/disable-driver-mounts (0.27s)
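Note that this skipped group still spends 0.27s: the harness deletes the pre-created profile even when the test body never runs, so skipped groups cannot leak state into later tests. A sketch of that cleanup step, assuming only the delete invocation shown in the log:

package integration

import (
	"os/exec"
	"testing"
)

// cleanupProfile mirrors the helpers_test.go:175/178 lines above: run the
// CLI delete for the profile and log (but tolerate) any failure, since
// cleanup also runs for profiles that were never started.
func cleanupProfile(t *testing.T, profile string) {
	t.Helper()
	t.Logf("Cleaning up %q profile ...", profile)
	out, err := exec.Command("out/minikube-linux-arm64", "delete", "-p", profile).CombinedOutput()
	if err != nil {
		t.Logf("profile cleanup failed: %v: %s", err, out)
	}
}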
x
+
TestNetworkPlugins/group/kubenet (5.73s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-081128 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-081128

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-081128

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-081128

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-081128

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-081128

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-081128

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-081128

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-081128

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-081128

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-081128

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081128"

>>> host: /etc/hosts:
* Profile "kubenet-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081128"

>>> host: /etc/resolv.conf:
* Profile "kubenet-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081128"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-081128

>>> host: crictl pods:
* Profile "kubenet-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081128"

>>> host: crictl containers:
* Profile "kubenet-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081128"

>>> k8s: describe netcat deployment:
error: context "kubenet-081128" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-081128" does not exist

>>> k8s: netcat logs:
error: context "kubenet-081128" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-081128" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-081128" does not exist

>>> k8s: coredns logs:
error: context "kubenet-081128" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-081128" does not exist

>>> k8s: api server logs:
error: context "kubenet-081128" does not exist

>>> host: /etc/cni:
* Profile "kubenet-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081128"

>>> host: ip a s:
* Profile "kubenet-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081128"

>>> host: ip r s:
* Profile "kubenet-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081128"

>>> host: iptables-save:
* Profile "kubenet-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081128"

>>> host: iptables table nat:
* Profile "kubenet-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081128"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-081128" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-081128" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-081128" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081128"

>>> host: kubelet daemon config:
* Profile "kubenet-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081128"

>>> k8s: kubelet logs:
* Profile "kubenet-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081128"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081128"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081128"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17967-719286/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Jan 2024 04:02:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: kubernetes-upgrade-713383
contexts:
- context:
    cluster: kubernetes-upgrade-713383
    extensions:
    - extension:
        last-update: Tue, 16 Jan 2024 04:02:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-713383
  name: kubernetes-upgrade-713383
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-713383
  user:
    client-certificate: /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/kubernetes-upgrade-713383/client.crt
    client-key: /home/jenkins/minikube-integration/17967-719286/.minikube/profiles/kubernetes-upgrade-713383/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-081128

>>> host: docker daemon status:
* Profile "kubenet-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081128"

>>> host: docker daemon config:
* Profile "kubenet-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081128"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081128"

>>> host: docker system info:
* Profile "kubenet-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081128"

>>> host: cri-docker daemon status:
* Profile "kubenet-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081128"

>>> host: cri-docker daemon config:
* Profile "kubenet-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081128"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081128"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081128"

>>> host: cri-dockerd version:
* Profile "kubenet-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081128"

>>> host: containerd daemon status:
* Profile "kubenet-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081128"

>>> host: containerd daemon config:
* Profile "kubenet-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081128"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081128"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081128"

>>> host: containerd config dump:
* Profile "kubenet-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081128"

>>> host: crio daemon status:
* Profile "kubenet-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081128"

>>> host: crio daemon config:
* Profile "kubenet-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081128"

>>> host: /etc/crio:
* Profile "kubenet-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081128"

>>> host: crio config:
* Profile "kubenet-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-081128"
----------------------- debugLogs end: kubenet-081128 [took: 5.423464182s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-081128" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-081128
--- SKIP: TestNetworkPlugins/group/kubenet (5.73s)
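Every command in the dump above failed the same way because the test skipped before any cluster was started: neither a kubenet-081128 kubeconfig context nor a minikube profile ever existed, yet the debugLogs collector still ran its full battery. (The kubernetes-upgrade-713383 entry under "k8s: kubectl config", with current-context unset, appears to be left over from a concurrently running test.) A sketch of the context pre-check such a collector could make, using client-go's kubeconfig loader; the surrounding program is illustrative, while the clientcmd calls are the real client-go API:

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

// contextExists reports whether the merged kubeconfig on disk defines the
// named context - the condition every kubectl call in the dump above tripped.
func contextExists(name string) (bool, error) {
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	cfg, err := rules.Load() // merges $KUBECONFIG and ~/.kube/config
	if err != nil {
		return false, err
	}
	_, ok := cfg.Contexts[name]
	return ok, nil
}

func main() {
	ok, err := contextExists("kubenet-081128")
	fmt.Println(ok, err) // would print "false <nil>" on this CI host
}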
x
+
TestNetworkPlugins/group/cilium (6.03s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-081128 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-081128

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-081128

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-081128

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-081128

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-081128

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-081128

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-081128

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-081128

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-081128

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-081128

>>> host: /etc/nsswitch.conf:
* Profile "cilium-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081128"

>>> host: /etc/hosts:
* Profile "cilium-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081128"

>>> host: /etc/resolv.conf:
* Profile "cilium-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081128"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-081128

>>> host: crictl pods:
* Profile "cilium-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081128"

>>> host: crictl containers:
* Profile "cilium-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081128"

>>> k8s: describe netcat deployment:
error: context "cilium-081128" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-081128" does not exist

>>> k8s: netcat logs:
error: context "cilium-081128" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-081128" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-081128" does not exist

>>> k8s: coredns logs:
error: context "cilium-081128" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-081128" does not exist

>>> k8s: api server logs:
error: context "cilium-081128" does not exist

>>> host: /etc/cni:
* Profile "cilium-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081128"

>>> host: ip a s:
* Profile "cilium-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081128"

>>> host: ip r s:
* Profile "cilium-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081128"

>>> host: iptables-save:
* Profile "cilium-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081128"

>>> host: iptables table nat:
* Profile "cilium-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081128"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-081128

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-081128

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-081128" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-081128" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-081128

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-081128

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-081128" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-081128" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-081128" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-081128" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-081128" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081128"

>>> host: kubelet daemon config:
* Profile "cilium-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081128"

>>> k8s: kubelet logs:
* Profile "cilium-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081128"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081128"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081128"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-081128

>>> host: docker daemon status:
* Profile "cilium-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081128"

>>> host: docker daemon config:
* Profile "cilium-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081128"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081128"

>>> host: docker system info:
* Profile "cilium-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081128"

>>> host: cri-docker daemon status:
* Profile "cilium-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081128"

>>> host: cri-docker daemon config:
* Profile "cilium-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081128"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081128"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081128"

>>> host: cri-dockerd version:
* Profile "cilium-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081128"

>>> host: containerd daemon status:
* Profile "cilium-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081128"

>>> host: containerd daemon config:
* Profile "cilium-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081128"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081128"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081128"

>>> host: containerd config dump:
* Profile "cilium-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081128"

>>> host: crio daemon status:
* Profile "cilium-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081128"

>>> host: crio daemon config:
* Profile "cilium-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081128"

>>> host: /etc/crio:
* Profile "cilium-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081128"

>>> host: crio config:
* Profile "cilium-081128" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081128"
----------------------- debugLogs end: cilium-081128 [took: 5.822779442s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-081128" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-081128
--- SKIP: TestNetworkPlugins/group/cilium (6.03s)
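As with kubenet, the ">>> host:" entries fail with the profile-not-found hint because no cilium-081128 profile was ever created (and here the merged kubeconfig was empty: clusters: null). Each host probe appears to be gathered by running a command on the profile's node through the minikube CLI, in the same invocation style seen elsewhere in this report; a sketch of that pattern, with the helper name purely illustrative:

package integration

import (
	"os/exec"
	"testing"
)

// hostCmd sketches a ">>> host:" probe: run a command on the profile's node
// via `minikube ssh`. With no such profile, the CLI prints the
// "Profile ... not found" hint captured above instead of the file contents.
func hostCmd(t *testing.T, profile, cmd string) string {
	t.Helper()
	out, _ := exec.Command("out/minikube-linux-arm64", "-p", profile, "ssh", cmd).CombinedOutput()
	return string(out)
}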