Test Report: Docker_Linux_crio_arm64 17965

5e5f17cf679477cd200ce76c4e9747d73049443e:2024-01-16:32726

Failed tests (4/320)

Order  Failed test                                          Duration (s)
39     TestAddons/parallel/Ingress                                168.37
124    TestFunctional/parallel/ImageCommands/ImageListJson          0.42
171    TestIngressAddonLegacy/serial/ValidateIngressAddons        177.85
221    TestMultiNode/serial/PingHostFrom2Pods                       3.97

TestAddons/parallel/Ingress (168.37s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-775662 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-775662 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-775662 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [975d8fe7-d894-4c6d-b230-bda3b9196f20] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [975d8fe7-d894-4c6d-b230-bda3b9196f20] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.005227085s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-775662 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-775662 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.968124705s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
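
Exit status 28 from the remote command matches curl's "operation timed out" exit code (ssh propagates the remote command's status), which suggests the ingress controller never answered on port 80 inside the node. A minimal sketch for re-running the same probe by hand against the addons-775662 profile; the --max-time flag is an addition here so a hung listener fails fast instead of blocking:

	out/minikube-linux-arm64 -p addons-775662 ssh "curl -sv --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"
	kubectl --context addons-775662 -n ingress-nginx get pods,svc -o wide
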
addons_test.go:286: (dbg) Run:  kubectl --context addons-775662 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-775662 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.059167119s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
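
The follow-up DNS probe against the node IP (192.168.49.2, from the minikube ip call above) also timed out, which points at the ingress-dns addon rather than the nginx ingress alone. A quick manual cross-check, assuming the addon pod was still running at that point; dig is used here as an alternative resolver client with an explicit short timeout:

	kubectl --context addons-775662 -n kube-system get pods -o wide
	dig +time=2 +tries=1 @192.168.49.2 hello-john.test
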
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p addons-775662 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-arm64 -p addons-775662 addons disable ingress-dns --alsologtostderr -v=1: (1.200188839s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p addons-775662 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p addons-775662 addons disable ingress --alsologtostderr -v=1: (7.807286553s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-775662
helpers_test.go:235: (dbg) docker inspect addons-775662:

-- stdout --
	[
	    {
	        "Id": "f3332017b9bfc9296f6e91ae27ee4be0e837e63a4eaf1333f2165a14ffb4bb87",
	        "Created": "2024-01-16T04:06:23.76204861Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2422282,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-16T04:06:24.088925091Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20e2d9b56eb2e595fd2b9c5719a0e58f3d7f8c692190d8fde2558cb6a9714f01",
	        "ResolvConfPath": "/var/lib/docker/containers/f3332017b9bfc9296f6e91ae27ee4be0e837e63a4eaf1333f2165a14ffb4bb87/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f3332017b9bfc9296f6e91ae27ee4be0e837e63a4eaf1333f2165a14ffb4bb87/hostname",
	        "HostsPath": "/var/lib/docker/containers/f3332017b9bfc9296f6e91ae27ee4be0e837e63a4eaf1333f2165a14ffb4bb87/hosts",
	        "LogPath": "/var/lib/docker/containers/f3332017b9bfc9296f6e91ae27ee4be0e837e63a4eaf1333f2165a14ffb4bb87/f3332017b9bfc9296f6e91ae27ee4be0e837e63a4eaf1333f2165a14ffb4bb87-json.log",
	        "Name": "/addons-775662",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-775662:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-775662",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/65028492a3673ed6775944c7a8e7631af5b5892c5a6517b051add9d9f57df0fd-init/diff:/var/lib/docker/overlay2/4fdef913b89fa4836b2db5064ca9b972974c59582e71c63616575ab943b0844e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/65028492a3673ed6775944c7a8e7631af5b5892c5a6517b051add9d9f57df0fd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/65028492a3673ed6775944c7a8e7631af5b5892c5a6517b051add9d9f57df0fd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/65028492a3673ed6775944c7a8e7631af5b5892c5a6517b051add9d9f57df0fd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-775662",
	                "Source": "/var/lib/docker/volumes/addons-775662/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-775662",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-775662",
	                "name.minikube.sigs.k8s.io": "addons-775662",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3fa0905f2acbe3107eec3df38868ba52f08e36afa7429a2e6d890544e134257c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35316"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35315"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35312"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35314"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35313"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3fa0905f2acb",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-775662": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "f3332017b9bf",
	                        "addons-775662"
	                    ],
	                    "NetworkID": "cae10f7897a32291b5c7d368aea81e7a809b7d825aa4868215239a5158671ab8",
	                    "EndpointID": "a84f7622242b0225ac79f215b921638dc3d4621fdb0b131d914b6f7211f94b79",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
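
The inspect dump above shows the node container still running, with the static IP 192.168.49.2 on the addons-775662 network and the expected localhost port bindings, so the failure sits inside the cluster rather than at the Docker layer. For spot checks, the same fields can be pulled without the full JSON via a Go template (a sketch; the network name is taken from the output above):

	docker inspect addons-775662 --format '{{.State.Status}} {{(index .NetworkSettings.Networks "addons-775662").IPAddress}}'
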
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-775662 -n addons-775662
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-775662 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-775662 logs -n 25: (1.683710393s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-320084                                                                     | download-only-320084   | jenkins | v1.32.0 | 16 Jan 24 04:05 UTC | 16 Jan 24 04:05 UTC |
	| delete  | -p download-only-925235                                                                     | download-only-925235   | jenkins | v1.32.0 | 16 Jan 24 04:05 UTC | 16 Jan 24 04:05 UTC |
	| delete  | -p download-only-859041                                                                     | download-only-859041   | jenkins | v1.32.0 | 16 Jan 24 04:05 UTC | 16 Jan 24 04:05 UTC |
	| start   | --download-only -p                                                                          | download-docker-565460 | jenkins | v1.32.0 | 16 Jan 24 04:05 UTC |                     |
	|         | download-docker-565460                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-565460                                                                   | download-docker-565460 | jenkins | v1.32.0 | 16 Jan 24 04:05 UTC | 16 Jan 24 04:05 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-061244   | jenkins | v1.32.0 | 16 Jan 24 04:05 UTC |                     |
	|         | binary-mirror-061244                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:41211                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-061244                                                                     | binary-mirror-061244   | jenkins | v1.32.0 | 16 Jan 24 04:05 UTC | 16 Jan 24 04:05 UTC |
	| addons  | disable dashboard -p                                                                        | addons-775662          | jenkins | v1.32.0 | 16 Jan 24 04:05 UTC |                     |
	|         | addons-775662                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-775662          | jenkins | v1.32.0 | 16 Jan 24 04:05 UTC |                     |
	|         | addons-775662                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-775662 --wait=true                                                                | addons-775662          | jenkins | v1.32.0 | 16 Jan 24 04:05 UTC | 16 Jan 24 04:08 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker                                                               |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| ip      | addons-775662 ip                                                                            | addons-775662          | jenkins | v1.32.0 | 16 Jan 24 04:09 UTC | 16 Jan 24 04:09 UTC |
	| addons  | addons-775662 addons disable                                                                | addons-775662          | jenkins | v1.32.0 | 16 Jan 24 04:09 UTC | 16 Jan 24 04:09 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-775662 addons                                                                        | addons-775662          | jenkins | v1.32.0 | 16 Jan 24 04:09 UTC | 16 Jan 24 04:09 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-775662          | jenkins | v1.32.0 | 16 Jan 24 04:09 UTC | 16 Jan 24 04:09 UTC |
	|         | addons-775662                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-775662 ssh curl -s                                                                   | addons-775662          | jenkins | v1.32.0 | 16 Jan 24 04:09 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-775662 addons                                                                        | addons-775662          | jenkins | v1.32.0 | 16 Jan 24 04:09 UTC | 16 Jan 24 04:09 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-775662 addons                                                                        | addons-775662          | jenkins | v1.32.0 | 16 Jan 24 04:09 UTC | 16 Jan 24 04:09 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-775662          | jenkins | v1.32.0 | 16 Jan 24 04:09 UTC | 16 Jan 24 04:09 UTC |
	|         | -p addons-775662                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-775662 ssh cat                                                                       | addons-775662          | jenkins | v1.32.0 | 16 Jan 24 04:10 UTC | 16 Jan 24 04:10 UTC |
	|         | /opt/local-path-provisioner/pvc-4b62521c-5878-4383-9538-7633795decd3_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-775662 addons disable                                                                | addons-775662          | jenkins | v1.32.0 | 16 Jan 24 04:10 UTC | 16 Jan 24 04:10 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-775662          | jenkins | v1.32.0 | 16 Jan 24 04:10 UTC | 16 Jan 24 04:10 UTC |
	|         | addons-775662                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-775662          | jenkins | v1.32.0 | 16 Jan 24 04:10 UTC | 16 Jan 24 04:10 UTC |
	|         | -p addons-775662                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-775662 ip                                                                            | addons-775662          | jenkins | v1.32.0 | 16 Jan 24 04:11 UTC | 16 Jan 24 04:11 UTC |
	| addons  | addons-775662 addons disable                                                                | addons-775662          | jenkins | v1.32.0 | 16 Jan 24 04:11 UTC | 16 Jan 24 04:11 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-775662 addons disable                                                                | addons-775662          | jenkins | v1.32.0 | 16 Jan 24 04:11 UTC | 16 Jan 24 04:12 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 04:05:59
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 04:05:59.955988 2421826 out.go:296] Setting OutFile to fd 1 ...
	I0116 04:05:59.956170 2421826 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 04:05:59.956181 2421826 out.go:309] Setting ErrFile to fd 2...
	I0116 04:05:59.956188 2421826 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 04:05:59.956453 2421826 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-2415678/.minikube/bin
	I0116 04:05:59.956919 2421826 out.go:303] Setting JSON to false
	I0116 04:05:59.957770 2421826 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":38891,"bootTime":1705339069,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0116 04:05:59.957841 2421826 start.go:138] virtualization:  
	I0116 04:05:59.960250 2421826 out.go:177] * [addons-775662] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0116 04:05:59.962636 2421826 out.go:177]   - MINIKUBE_LOCATION=17965
	I0116 04:05:59.964625 2421826 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 04:05:59.962795 2421826 notify.go:220] Checking for updates...
	I0116 04:05:59.966627 2421826 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17965-2415678/kubeconfig
	I0116 04:05:59.968676 2421826 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-2415678/.minikube
	I0116 04:05:59.970585 2421826 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0116 04:05:59.972279 2421826 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 04:05:59.974340 2421826 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 04:05:59.998012 2421826 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0116 04:05:59.998144 2421826 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 04:06:00.118094 2421826 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2024-01-16 04:06:00.099581586 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0116 04:06:00.118246 2421826 docker.go:295] overlay module found
	I0116 04:06:00.124895 2421826 out.go:177] * Using the docker driver based on user configuration
	I0116 04:06:00.127734 2421826 start.go:298] selected driver: docker
	I0116 04:06:00.127765 2421826 start.go:902] validating driver "docker" against <nil>
	I0116 04:06:00.127780 2421826 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 04:06:00.128542 2421826 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 04:06:00.224019 2421826 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2024-01-16 04:06:00.212608557 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0116 04:06:00.224194 2421826 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0116 04:06:00.224459 2421826 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0116 04:06:00.226675 2421826 out.go:177] * Using Docker driver with root privileges
	I0116 04:06:00.229285 2421826 cni.go:84] Creating CNI manager for ""
	I0116 04:06:00.229314 2421826 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0116 04:06:00.229331 2421826 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0116 04:06:00.229351 2421826 start_flags.go:321] config:
	{Name:addons-775662 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-775662 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 04:06:00.233405 2421826 out.go:177] * Starting control plane node addons-775662 in cluster addons-775662
	I0116 04:06:00.236167 2421826 cache.go:121] Beginning downloading kic base image for docker with crio
	I0116 04:06:00.238497 2421826 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0116 04:06:00.241680 2421826 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 04:06:00.241753 2421826 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17965-2415678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I0116 04:06:00.241767 2421826 cache.go:56] Caching tarball of preloaded images
	I0116 04:06:00.241777 2421826 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0116 04:06:00.241898 2421826 preload.go:174] Found /home/jenkins/minikube-integration/17965-2415678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0116 04:06:00.241916 2421826 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0116 04:06:00.242309 2421826 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/config.json ...
	I0116 04:06:00.242347 2421826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/config.json: {Name:mk40e5171cd0ad50427c804276b3a8945b2cf10b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 04:06:00.261175 2421826 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0116 04:06:00.261332 2421826 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0116 04:06:00.261356 2421826 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory, skipping pull
	I0116 04:06:00.261363 2421826 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in cache, skipping pull
	I0116 04:06:00.261372 2421826 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	I0116 04:06:00.261378 2421826 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 from local cache
	I0116 04:06:16.057957 2421826 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 from cached tarball
	I0116 04:06:16.057998 2421826 cache.go:194] Successfully downloaded all kic artifacts
	I0116 04:06:16.058070 2421826 start.go:365] acquiring machines lock for addons-775662: {Name:mk289e1724e5b5455d3eee285abed22d35483102 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 04:06:16.058199 2421826 start.go:369] acquired machines lock for "addons-775662" in 105.926µs
	I0116 04:06:16.058234 2421826 start.go:93] Provisioning new machine with config: &{Name:addons-775662 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-775662 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 04:06:16.058310 2421826 start.go:125] createHost starting for "" (driver="docker")
	I0116 04:06:16.060910 2421826 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0116 04:06:16.061162 2421826 start.go:159] libmachine.API.Create for "addons-775662" (driver="docker")
	I0116 04:06:16.061192 2421826 client.go:168] LocalClient.Create starting
	I0116 04:06:16.061326 2421826 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/ca.pem
	I0116 04:06:16.288258 2421826 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/cert.pem
	I0116 04:06:17.397830 2421826 cli_runner.go:164] Run: docker network inspect addons-775662 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0116 04:06:17.418157 2421826 cli_runner.go:211] docker network inspect addons-775662 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0116 04:06:17.418244 2421826 network_create.go:281] running [docker network inspect addons-775662] to gather additional debugging logs...
	I0116 04:06:17.418271 2421826 cli_runner.go:164] Run: docker network inspect addons-775662
	W0116 04:06:17.437884 2421826 cli_runner.go:211] docker network inspect addons-775662 returned with exit code 1
	I0116 04:06:17.437916 2421826 network_create.go:284] error running [docker network inspect addons-775662]: docker network inspect addons-775662: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-775662 not found
	I0116 04:06:17.437929 2421826 network_create.go:286] output of [docker network inspect addons-775662]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-775662 not found
	
	** /stderr **
	I0116 04:06:17.438023 2421826 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0116 04:06:17.455979 2421826 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400252ef00}
	I0116 04:06:17.456041 2421826 network_create.go:124] attempt to create docker network addons-775662 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0116 04:06:17.456100 2421826 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-775662 addons-775662
	I0116 04:06:17.535755 2421826 network_create.go:108] docker network addons-775662 192.168.49.0/24 created
	I0116 04:06:17.535790 2421826 kic.go:121] calculated static IP "192.168.49.2" for the "addons-775662" container
	I0116 04:06:17.535860 2421826 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0116 04:06:17.557072 2421826 cli_runner.go:164] Run: docker volume create addons-775662 --label name.minikube.sigs.k8s.io=addons-775662 --label created_by.minikube.sigs.k8s.io=true
	I0116 04:06:17.575512 2421826 oci.go:103] Successfully created a docker volume addons-775662
	I0116 04:06:17.575602 2421826 cli_runner.go:164] Run: docker run --rm --name addons-775662-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-775662 --entrypoint /usr/bin/test -v addons-775662:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0116 04:06:19.443858 2421826 cli_runner.go:217] Completed: docker run --rm --name addons-775662-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-775662 --entrypoint /usr/bin/test -v addons-775662:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib: (1.868214575s)
	I0116 04:06:19.443887 2421826 oci.go:107] Successfully prepared a docker volume addons-775662
	I0116 04:06:19.443908 2421826 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 04:06:19.443927 2421826 kic.go:194] Starting extracting preloaded images to volume ...
	I0116 04:06:19.444006 2421826 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17965-2415678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-775662:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0116 04:06:23.680453 2421826 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17965-2415678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-775662:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.236394308s)
	I0116 04:06:23.680484 2421826 kic.go:203] duration metric: took 4.236554 seconds to extract preloaded images to volume
	W0116 04:06:23.680624 2421826 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0116 04:06:23.680735 2421826 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0116 04:06:23.746226 2421826 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-775662 --name addons-775662 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-775662 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-775662 --network addons-775662 --ip 192.168.49.2 --volume addons-775662:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0
	I0116 04:06:24.097301 2421826 cli_runner.go:164] Run: docker container inspect addons-775662 --format={{.State.Running}}
	I0116 04:06:24.128651 2421826 cli_runner.go:164] Run: docker container inspect addons-775662 --format={{.State.Status}}
	I0116 04:06:24.151816 2421826 cli_runner.go:164] Run: docker exec addons-775662 stat /var/lib/dpkg/alternatives/iptables
	I0116 04:06:24.222681 2421826 oci.go:144] the created container "addons-775662" has a running status.
	I0116 04:06:24.222718 2421826 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17965-2415678/.minikube/machines/addons-775662/id_rsa...
	I0116 04:06:25.019613 2421826 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17965-2415678/.minikube/machines/addons-775662/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0116 04:06:25.049288 2421826 cli_runner.go:164] Run: docker container inspect addons-775662 --format={{.State.Status}}
	I0116 04:06:25.069878 2421826 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0116 04:06:25.069899 2421826 kic_runner.go:114] Args: [docker exec --privileged addons-775662 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0116 04:06:25.149638 2421826 cli_runner.go:164] Run: docker container inspect addons-775662 --format={{.State.Status}}
	I0116 04:06:25.189004 2421826 machine.go:88] provisioning docker machine ...
	I0116 04:06:25.189035 2421826 ubuntu.go:169] provisioning hostname "addons-775662"
	I0116 04:06:25.189145 2421826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-775662
	I0116 04:06:25.221118 2421826 main.go:141] libmachine: Using SSH client type: native
	I0116 04:06:25.221551 2421826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 35316 <nil> <nil>}
	I0116 04:06:25.221572 2421826 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-775662 && echo "addons-775662" | sudo tee /etc/hostname
	I0116 04:06:25.380234 2421826 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-775662
	
	I0116 04:06:25.380315 2421826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-775662
	I0116 04:06:25.398562 2421826 main.go:141] libmachine: Using SSH client type: native
	I0116 04:06:25.398976 2421826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 35316 <nil> <nil>}
	I0116 04:06:25.398994 2421826 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-775662' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-775662/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-775662' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 04:06:25.533657 2421826 main.go:141] libmachine: SSH cmd err, output: <nil>: 
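
Note: the two SSH commands above are idempotent: the first sets the kernel hostname and /etc/hostname, the second rewrites an existing 127.0.1.1 entry in /etc/hosts (or appends one) so the name resolves locally. A quick manual check from the host would be (illustrative, not part of this run):

	docker exec addons-775662 sh -c 'hostname; grep 127.0.1.1 /etc/hosts'
	# expected: addons-775662
	#           127.0.1.1 addons-775662
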
	I0116 04:06:25.533686 2421826 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17965-2415678/.minikube CaCertPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17965-2415678/.minikube}
	I0116 04:06:25.533707 2421826 ubuntu.go:177] setting up certificates
	I0116 04:06:25.533718 2421826 provision.go:83] configureAuth start
	I0116 04:06:25.533777 2421826 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-775662
	I0116 04:06:25.552448 2421826 provision.go:138] copyHostCerts
	I0116 04:06:25.552532 2421826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17965-2415678/.minikube/ca.pem (1078 bytes)
	I0116 04:06:25.552672 2421826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17965-2415678/.minikube/cert.pem (1123 bytes)
	I0116 04:06:25.552781 2421826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17965-2415678/.minikube/key.pem (1679 bytes)
	I0116 04:06:25.552852 2421826 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17965-2415678/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17965-2415678/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17965-2415678/.minikube/certs/ca-key.pem org=jenkins.addons-775662 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-775662]
	I0116 04:06:25.857866 2421826 provision.go:172] copyRemoteCerts
	I0116 04:06:25.857963 2421826 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 04:06:25.858016 2421826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-775662
	I0116 04:06:25.878912 2421826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35316 SSHKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/machines/addons-775662/id_rsa Username:docker}
	I0116 04:06:25.980189 2421826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0116 04:06:26.012452 2421826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-2415678/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0116 04:06:26.043677 2421826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-2415678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0116 04:06:26.072862 2421826 provision.go:86] duration metric: configureAuth took 539.129717ms
	I0116 04:06:26.072891 2421826 ubuntu.go:193] setting minikube options for container-runtime
	I0116 04:06:26.073105 2421826 config.go:182] Loaded profile config "addons-775662": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 04:06:26.073236 2421826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-775662
	I0116 04:06:26.092345 2421826 main.go:141] libmachine: Using SSH client type: native
	I0116 04:06:26.092832 2421826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 35316 <nil> <nil>}
	I0116 04:06:26.092855 2421826 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 04:06:26.346829 2421826 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 04:06:26.346859 2421826 machine.go:91] provisioned docker machine in 1.157834094s
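
Note: CRIO_MINIKUBE_OPTIONS is written to a sysconfig drop-in rather than into crio.conf; minikube's node image has the crio unit read it as an environment file, which is how the 10.96.0.0/12 service CIDR becomes an insecure-registry range. To confirm the file and the unit wiring by hand (commands are illustrative, not from this log):

	docker exec addons-775662 sh -c \
	  'cat /etc/sysconfig/crio.minikube; systemctl cat crio | grep -i EnvironmentFile'
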
	I0116 04:06:26.346880 2421826 client.go:171] LocalClient.Create took 10.285668398s
	I0116 04:06:26.346896 2421826 start.go:167] duration metric: libmachine.API.Create for "addons-775662" took 10.285734604s
	I0116 04:06:26.346910 2421826 start.go:300] post-start starting for "addons-775662" (driver="docker")
	I0116 04:06:26.346922 2421826 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 04:06:26.346999 2421826 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 04:06:26.347058 2421826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-775662
	I0116 04:06:26.365340 2421826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35316 SSHKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/machines/addons-775662/id_rsa Username:docker}
	I0116 04:06:26.464339 2421826 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 04:06:26.468623 2421826 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0116 04:06:26.468661 2421826 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0116 04:06:26.468682 2421826 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0116 04:06:26.468690 2421826 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0116 04:06:26.468701 2421826 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-2415678/.minikube/addons for local assets ...
	I0116 04:06:26.468802 2421826 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-2415678/.minikube/files for local assets ...
	I0116 04:06:26.468835 2421826 start.go:303] post-start completed in 121.917864ms
	I0116 04:06:26.469149 2421826 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-775662
	I0116 04:06:26.487976 2421826 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/config.json ...
	I0116 04:06:26.488339 2421826 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0116 04:06:26.488402 2421826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-775662
	I0116 04:06:26.506819 2421826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35316 SSHKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/machines/addons-775662/id_rsa Username:docker}
	I0116 04:06:26.607373 2421826 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0116 04:06:26.613754 2421826 start.go:128] duration metric: createHost completed in 10.555426118s
	I0116 04:06:26.613781 2421826 start.go:83] releasing machines lock for "addons-775662", held for 10.555565913s
	I0116 04:06:26.613872 2421826 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-775662
	I0116 04:06:26.637176 2421826 ssh_runner.go:195] Run: cat /version.json
	I0116 04:06:26.637232 2421826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-775662
	I0116 04:06:26.637291 2421826 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 04:06:26.637367 2421826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-775662
	I0116 04:06:26.658619 2421826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35316 SSHKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/machines/addons-775662/id_rsa Username:docker}
	I0116 04:06:26.660672 2421826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35316 SSHKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/machines/addons-775662/id_rsa Username:docker}
	I0116 04:06:26.753465 2421826 ssh_runner.go:195] Run: systemctl --version
	I0116 04:06:26.890500 2421826 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 04:06:27.041661 2421826 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0116 04:06:27.048353 2421826 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 04:06:27.075032 2421826 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0116 04:06:27.075171 2421826 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 04:06:27.119205 2421826 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0116 04:06:27.119281 2421826 start.go:475] detecting cgroup driver to use...
	I0116 04:06:27.119329 2421826 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0116 04:06:27.119409 2421826 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 04:06:27.139219 2421826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 04:06:27.154682 2421826 docker.go:217] disabling cri-docker service (if available) ...
	I0116 04:06:27.154754 2421826 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 04:06:27.172395 2421826 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 04:06:27.189357 2421826 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 04:06:27.289225 2421826 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 04:06:27.401625 2421826 docker.go:233] disabling docker service ...
	I0116 04:06:27.401739 2421826 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 04:06:27.425382 2421826 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 04:06:27.439558 2421826 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 04:06:27.540774 2421826 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 04:06:27.649155 2421826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 04:06:27.663547 2421826 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 04:06:27.683583 2421826 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 04:06:27.683719 2421826 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 04:06:27.696245 2421826 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 04:06:27.696325 2421826 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 04:06:27.708320 2421826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 04:06:27.721115 2421826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 04:06:27.732740 2421826 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 04:06:27.743707 2421826 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 04:06:27.753797 2421826 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 04:06:27.764015 2421826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 04:06:27.861162 2421826 ssh_runner.go:195] Run: sudo systemctl restart crio
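
Note: the three sed edits above leave the CRI-O drop-in with (reconstructed from the commands; surrounding keys may differ):

	# /etc/crio/crio.conf.d/02-crio.conf, relevant keys after the edits
	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"

CRI-O merges crio.conf.d fragments over the base crio.conf, so the restart picks these up without rewriting the whole configuration.
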
	I0116 04:06:27.978139 2421826 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 04:06:27.978226 2421826 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 04:06:27.982879 2421826 start.go:543] Will wait 60s for crictl version
	I0116 04:06:27.983000 2421826 ssh_runner.go:195] Run: which crictl
	I0116 04:06:27.987698 2421826 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 04:06:28.032077 2421826 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0116 04:06:28.032246 2421826 ssh_runner.go:195] Run: crio --version
	I0116 04:06:28.079160 2421826 ssh_runner.go:195] Run: crio --version
	I0116 04:06:28.130989 2421826 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0116 04:06:28.133053 2421826 cli_runner.go:164] Run: docker network inspect addons-775662 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0116 04:06:28.152060 2421826 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0116 04:06:28.156825 2421826 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 04:06:28.170529 2421826 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 04:06:28.170596 2421826 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 04:06:28.237314 2421826 crio.go:496] all images are preloaded for cri-o runtime.
	I0116 04:06:28.237339 2421826 crio.go:415] Images already preloaded, skipping extraction
	I0116 04:06:28.237395 2421826 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 04:06:28.283539 2421826 crio.go:496] all images are preloaded for cri-o runtime.
	I0116 04:06:28.283560 2421826 cache_images.go:84] Images are preloaded, skipping loading
	I0116 04:06:28.283635 2421826 ssh_runner.go:195] Run: crio config
	I0116 04:06:28.337590 2421826 cni.go:84] Creating CNI manager for ""
	I0116 04:06:28.337614 2421826 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0116 04:06:28.337660 2421826 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 04:06:28.337686 2421826 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-775662 NodeName:addons-775662 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 04:06:28.337837 2421826 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-775662"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
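
Note: the generated kubeadm config is one multi-document YAML: InitConfiguration (node-local settings: advertise address, CRI socket, taints), ClusterConfiguration (cert SANs, certificatesDir, pod/service CIDRs), KubeletConfiguration (cgroupfs driver, disk eviction disabled for CI), and KubeProxyConfiguration (conntrack limits and timeouts zeroed so kube-proxy skips net.netfilter sysctls it cannot set inside a container). The defaults these documents override can be printed with (illustrative; not run in this log):

	kubeadm config print init-defaults \
	  --component-configs KubeletConfiguration,KubeProxyConfiguration
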
	
	I0116 04:06:28.337897 2421826 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-775662 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-775662 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
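
Note: the empty `ExecStart=` line in the unit text above is the standard systemd override idiom: it clears the stock kubelet ExecStart so the drop-in's own ExecStart replaces it rather than being appended. The drop-in is installed below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; the merged unit could be inspected with (assumption, not run here):

	systemctl cat kubelet
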
	I0116 04:06:28.337971 2421826 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 04:06:28.348815 2421826 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 04:06:28.348917 2421826 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 04:06:28.359608 2421826 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I0116 04:06:28.381230 2421826 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 04:06:28.403572 2421826 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I0116 04:06:28.425483 2421826 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0116 04:06:28.430104 2421826 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 04:06:28.443807 2421826 certs.go:56] Setting up /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662 for IP: 192.168.49.2
	I0116 04:06:28.443842 2421826 certs.go:190] acquiring lock for shared ca certs: {Name:mkfc28b038850f5c4d343916ed6224daf2d0b70f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 04:06:28.443965 2421826 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17965-2415678/.minikube/ca.key
	I0116 04:06:28.881523 2421826 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17965-2415678/.minikube/ca.crt ...
	I0116 04:06:28.881565 2421826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-2415678/.minikube/ca.crt: {Name:mk23f0f7038cf2787b154e2dd47c1960b00f9de4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 04:06:28.881808 2421826 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17965-2415678/.minikube/ca.key ...
	I0116 04:06:28.881826 2421826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-2415678/.minikube/ca.key: {Name:mk2dcdd07ec86671a96e357e80201ade5a3a5063 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 04:06:28.881946 2421826 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17965-2415678/.minikube/proxy-client-ca.key
	I0116 04:06:29.898730 2421826 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17965-2415678/.minikube/proxy-client-ca.crt ...
	I0116 04:06:29.898767 2421826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-2415678/.minikube/proxy-client-ca.crt: {Name:mkf96a9c564ee9aee9cb59cae9a9d65ac3e7ce77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 04:06:29.898969 2421826 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17965-2415678/.minikube/proxy-client-ca.key ...
	I0116 04:06:29.898984 2421826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-2415678/.minikube/proxy-client-ca.key: {Name:mkaa3ba7af2e4c23251f9a9f188c163097fae6f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 04:06:29.899701 2421826 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/client.key
	I0116 04:06:29.899728 2421826 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/client.crt with IP's: []
	I0116 04:06:30.648456 2421826 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/client.crt ...
	I0116 04:06:30.648488 2421826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/client.crt: {Name:mk0b1bc13271e58182cf0bd5f2b417b35336e692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 04:06:30.649209 2421826 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/client.key ...
	I0116 04:06:30.649226 2421826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/client.key: {Name:mk6fa66010fc3fc3db36b29712e88ca990b3baa8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 04:06:30.649318 2421826 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/apiserver.key.dd3b5fb2
	I0116 04:06:30.649338 2421826 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0116 04:06:31.285378 2421826 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/apiserver.crt.dd3b5fb2 ...
	I0116 04:06:31.285410 2421826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/apiserver.crt.dd3b5fb2: {Name:mk074fb97f04caf1c1a49df55c33a0ad406963f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 04:06:31.285598 2421826 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/apiserver.key.dd3b5fb2 ...
	I0116 04:06:31.285614 2421826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/apiserver.key.dd3b5fb2: {Name:mkdb1382fa46eb06b0fe51c5ff4f3be50a0dfdd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 04:06:31.286105 2421826 certs.go:337] copying /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/apiserver.crt
	I0116 04:06:31.286186 2421826 certs.go:341] copying /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/apiserver.key
	I0116 04:06:31.286237 2421826 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/proxy-client.key
	I0116 04:06:31.286258 2421826 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/proxy-client.crt with IP's: []
	I0116 04:06:31.508606 2421826 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/proxy-client.crt ...
	I0116 04:06:31.508640 2421826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/proxy-client.crt: {Name:mk6830fc2a822501be15e7142ba0ecab2ff8bf73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 04:06:31.509341 2421826 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/proxy-client.key ...
	I0116 04:06:31.509361 2421826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/proxy-client.key: {Name:mk3b3a496ba5b6d6cbc5a82d64b0634463ec13c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 04:06:31.509558 2421826 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/ca-key.pem (1675 bytes)
	I0116 04:06:31.509612 2421826 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/ca.pem (1078 bytes)
	I0116 04:06:31.509644 2421826 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/cert.pem (1123 bytes)
	I0116 04:06:31.509674 2421826 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/key.pem (1679 bytes)
	I0116 04:06:31.510287 2421826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 04:06:31.541935 2421826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0116 04:06:31.571802 2421826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 04:06:31.601332 2421826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0116 04:06:31.629965 2421826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-2415678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 04:06:31.659297 2421826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-2415678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 04:06:31.688245 2421826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-2415678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 04:06:31.717604 2421826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-2415678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0116 04:06:31.747396 2421826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-2415678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 04:06:31.776913 2421826 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 04:06:31.798830 2421826 ssh_runner.go:195] Run: openssl version
	I0116 04:06:31.806529 2421826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 04:06:31.819427 2421826 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 04:06:31.824180 2421826 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 04:06 /usr/share/ca-certificates/minikubeCA.pem
	I0116 04:06:31.824246 2421826 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 04:06:31.832886 2421826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
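
Note: b5213941 is the OpenSSL subject hash of minikubeCA, and the <hash>.0 symlink in /etc/ssl/certs is how OpenSSL's default verification path looks up CA certificates. The two commands above reduce to (illustrative recap):

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
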
	I0116 04:06:31.844840 2421826 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 04:06:31.849267 2421826 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 04:06:31.849355 2421826 kubeadm.go:404] StartCluster: {Name:addons-775662 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-775662 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 04:06:31.849449 2421826 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 04:06:31.849505 2421826 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 04:06:31.894713 2421826 cri.go:89] found id: ""
	I0116 04:06:31.894856 2421826 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 04:06:31.906544 2421826 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 04:06:31.917781 2421826 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0116 04:06:31.917873 2421826 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 04:06:31.929714 2421826 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 04:06:31.929756 2421826 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0116 04:06:31.987942 2421826 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0116 04:06:31.988182 2421826 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 04:06:32.037343 2421826 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0116 04:06:32.037416 2421826 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-aws
	I0116 04:06:32.037462 2421826 kubeadm.go:322] OS: Linux
	I0116 04:06:32.037512 2421826 kubeadm.go:322] CGROUPS_CPU: enabled
	I0116 04:06:32.037564 2421826 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0116 04:06:32.037613 2421826 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0116 04:06:32.037667 2421826 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0116 04:06:32.037719 2421826 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0116 04:06:32.037775 2421826 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0116 04:06:32.037824 2421826 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0116 04:06:32.037875 2421826 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0116 04:06:32.037921 2421826 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0116 04:06:32.123786 2421826 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 04:06:32.123895 2421826 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 04:06:32.123990 2421826 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0116 04:06:32.375144 2421826 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 04:06:32.379149 2421826 out.go:204]   - Generating certificates and keys ...
	I0116 04:06:32.379296 2421826 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 04:06:32.379391 2421826 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 04:06:33.380772 2421826 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0116 04:06:33.894117 2421826 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0116 04:06:34.279080 2421826 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0116 04:06:34.995277 2421826 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0116 04:06:35.885165 2421826 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0116 04:06:35.885498 2421826 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-775662 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0116 04:06:36.091769 2421826 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0116 04:06:36.092119 2421826 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-775662 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0116 04:06:36.427006 2421826 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0116 04:06:36.578077 2421826 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0116 04:06:36.744166 2421826 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0116 04:06:36.744437 2421826 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 04:06:37.115758 2421826 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 04:06:37.526926 2421826 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 04:06:38.190975 2421826 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 04:06:38.722021 2421826 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 04:06:38.722911 2421826 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 04:06:38.727906 2421826 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 04:06:38.730781 2421826 out.go:204]   - Booting up control plane ...
	I0116 04:06:38.730891 2421826 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 04:06:38.730974 2421826 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 04:06:38.732002 2421826 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 04:06:38.743538 2421826 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 04:06:38.743633 2421826 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 04:06:38.743671 2421826 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0116 04:06:38.868054 2421826 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 04:06:45.372709 2421826 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.504089 seconds
	I0116 04:06:45.372852 2421826 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 04:06:45.395388 2421826 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 04:06:45.930399 2421826 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0116 04:06:45.930602 2421826 kubeadm.go:322] [mark-control-plane] Marking the node addons-775662 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0116 04:06:46.442613 2421826 kubeadm.go:322] [bootstrap-token] Using token: qes5qa.or72epsazupzwyem
	I0116 04:06:46.444440 2421826 out.go:204]   - Configuring RBAC rules ...
	I0116 04:06:46.444556 2421826 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 04:06:46.451649 2421826 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0116 04:06:46.459564 2421826 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 04:06:46.463549 2421826 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 04:06:46.467346 2421826 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 04:06:46.471015 2421826 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 04:06:46.483664 2421826 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0116 04:06:46.719030 2421826 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0116 04:06:46.858822 2421826 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0116 04:06:46.859958 2421826 kubeadm.go:322] 
	I0116 04:06:46.860036 2421826 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0116 04:06:46.860048 2421826 kubeadm.go:322] 
	I0116 04:06:46.860121 2421826 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0116 04:06:46.860131 2421826 kubeadm.go:322] 
	I0116 04:06:46.860155 2421826 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0116 04:06:46.860215 2421826 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 04:06:46.860270 2421826 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 04:06:46.860278 2421826 kubeadm.go:322] 
	I0116 04:06:46.860329 2421826 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0116 04:06:46.860336 2421826 kubeadm.go:322] 
	I0116 04:06:46.860381 2421826 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0116 04:06:46.860391 2421826 kubeadm.go:322] 
	I0116 04:06:46.860445 2421826 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0116 04:06:46.860524 2421826 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 04:06:46.860597 2421826 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 04:06:46.860606 2421826 kubeadm.go:322] 
	I0116 04:06:46.860690 2421826 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0116 04:06:46.860806 2421826 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0116 04:06:46.860815 2421826 kubeadm.go:322] 
	I0116 04:06:46.860894 2421826 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token qes5qa.or72epsazupzwyem \
	I0116 04:06:46.861005 2421826 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c8e67ac96916dfae1995365a18c7132d078acd6bda510edb19f010658e1bfbad \
	I0116 04:06:46.861032 2421826 kubeadm.go:322] 	--control-plane 
	I0116 04:06:46.861037 2421826 kubeadm.go:322] 
	I0116 04:06:46.861130 2421826 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0116 04:06:46.861140 2421826 kubeadm.go:322] 
	I0116 04:06:46.861217 2421826 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token qes5qa.or72epsazupzwyem \
	I0116 04:06:46.861325 2421826 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c8e67ac96916dfae1995365a18c7132d078acd6bda510edb19f010658e1bfbad 
	I0116 04:06:46.865017 2421826 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I0116 04:06:46.865130 2421826 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
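
Note: the --discovery-token-ca-cert-hash printed above is a SHA-256 of the cluster CA's DER-encoded public key (standard kubeadm behavior, not shown in this log). Assuming the RSA CA key kubeadm generates by default and the certificatesDir configured above, it can be recomputed with:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
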
	I0116 04:06:46.865151 2421826 cni.go:84] Creating CNI manager for ""
	I0116 04:06:46.865162 2421826 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0116 04:06:46.867412 2421826 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0116 04:06:46.869488 2421826 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0116 04:06:46.875543 2421826 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0116 04:06:46.875575 2421826 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0116 04:06:46.929146 2421826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0116 04:06:47.825838 2421826 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 04:06:47.825968 2421826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:06:47.826068 2421826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=6e8fa5f64d0e7272be43ff25ed3826261f0a2578 minikube.k8s.io/name=addons-775662 minikube.k8s.io/updated_at=2024_01_16T04_06_47_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:06:47.842582 2421826 ops.go:34] apiserver oom_adj: -16
	I0116 04:06:47.946597 2421826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:06:48.446887 2421826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:06:48.946663 2421826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:06:49.447556 2421826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:06:49.947024 2421826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:06:50.447640 2421826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:06:50.947614 2421826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:06:51.446877 2421826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:06:51.947568 2421826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:06:52.446932 2421826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:06:52.947565 2421826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:06:53.447093 2421826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:06:53.947291 2421826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:06:54.447138 2421826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:06:54.946964 2421826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:06:55.446695 2421826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:06:55.946869 2421826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:06:56.446696 2421826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:06:56.946667 2421826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:06:57.447403 2421826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:06:57.946830 2421826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:06:58.447098 2421826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:06:58.947486 2421826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:06:59.447461 2421826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:06:59.551588 2421826 kubeadm.go:1088] duration metric: took 11.725664777s to wait for elevateKubeSystemPrivileges.
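
Note: the burst of identical `kubectl get sa default` invocations above is a ~500ms poll: pods cannot be admitted in a namespace until its `default` ServiceAccount exists, so minikube waits for it before granting kube-system privileges and applying addons. The loop amounts to (sketch):

	until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done
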
	I0116 04:06:59.551615 2421826 kubeadm.go:406] StartCluster complete in 27.70226445s
	I0116 04:06:59.551633 2421826 settings.go:142] acquiring lock: {Name:mk66adae4842b25a93c5566bbfd72e0abd3ff5ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 04:06:59.551765 2421826 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17965-2415678/kubeconfig
	I0116 04:06:59.552232 2421826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-2415678/kubeconfig: {Name:mk62b61676cf27f7a957a454bc1b05d015789bca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 04:06:59.554622 2421826 config.go:182] Loaded profile config "addons-775662": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 04:06:59.554680 2421826 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 04:06:59.554835 2421826 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0116 04:06:59.554923 2421826 addons.go:69] Setting yakd=true in profile "addons-775662"
	I0116 04:06:59.554941 2421826 addons.go:234] Setting addon yakd=true in "addons-775662"
	I0116 04:06:59.555006 2421826 host.go:66] Checking if "addons-775662" exists ...
	I0116 04:06:59.555533 2421826 cli_runner.go:164] Run: docker container inspect addons-775662 --format={{.State.Status}}
	I0116 04:06:59.556169 2421826 addons.go:69] Setting inspektor-gadget=true in profile "addons-775662"
	I0116 04:06:59.556191 2421826 addons.go:234] Setting addon inspektor-gadget=true in "addons-775662"
	I0116 04:06:59.556222 2421826 host.go:66] Checking if "addons-775662" exists ...
	I0116 04:06:59.556619 2421826 cli_runner.go:164] Run: docker container inspect addons-775662 --format={{.State.Status}}
	I0116 04:06:59.558473 2421826 addons.go:69] Setting cloud-spanner=true in profile "addons-775662"
	I0116 04:06:59.558492 2421826 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-775662"
	I0116 04:06:59.558501 2421826 addons.go:234] Setting addon cloud-spanner=true in "addons-775662"
	I0116 04:06:59.558511 2421826 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-775662"
	I0116 04:06:59.558536 2421826 host.go:66] Checking if "addons-775662" exists ...
	I0116 04:06:59.558557 2421826 host.go:66] Checking if "addons-775662" exists ...
	I0116 04:06:59.558936 2421826 cli_runner.go:164] Run: docker container inspect addons-775662 --format={{.State.Status}}
	I0116 04:06:59.559038 2421826 cli_runner.go:164] Run: docker container inspect addons-775662 --format={{.State.Status}}
	I0116 04:06:59.561691 2421826 addons.go:69] Setting registry=true in profile "addons-775662"
	I0116 04:06:59.561715 2421826 addons.go:234] Setting addon registry=true in "addons-775662"
	I0116 04:06:59.561770 2421826 host.go:66] Checking if "addons-775662" exists ...
	I0116 04:06:59.562198 2421826 cli_runner.go:164] Run: docker container inspect addons-775662 --format={{.State.Status}}
	I0116 04:06:59.558479 2421826 addons.go:69] Setting metrics-server=true in profile "addons-775662"
	I0116 04:06:59.572855 2421826 addons.go:234] Setting addon metrics-server=true in "addons-775662"
	I0116 04:06:59.572932 2421826 host.go:66] Checking if "addons-775662" exists ...
	I0116 04:06:59.573430 2421826 cli_runner.go:164] Run: docker container inspect addons-775662 --format={{.State.Status}}
	I0116 04:06:59.580957 2421826 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-775662"
	I0116 04:06:59.581022 2421826 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-775662"
	I0116 04:06:59.581065 2421826 host.go:66] Checking if "addons-775662" exists ...
	I0116 04:06:59.581540 2421826 cli_runner.go:164] Run: docker container inspect addons-775662 --format={{.State.Status}}
	I0116 04:06:59.587214 2421826 addons.go:69] Setting storage-provisioner=true in profile "addons-775662"
	I0116 04:06:59.587251 2421826 addons.go:234] Setting addon storage-provisioner=true in "addons-775662"
	I0116 04:06:59.587299 2421826 host.go:66] Checking if "addons-775662" exists ...
	I0116 04:06:59.587771 2421826 cli_runner.go:164] Run: docker container inspect addons-775662 --format={{.State.Status}}
	I0116 04:06:59.596988 2421826 addons.go:69] Setting default-storageclass=true in profile "addons-775662"
	I0116 04:06:59.597037 2421826 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-775662"
	I0116 04:06:59.597387 2421826 cli_runner.go:164] Run: docker container inspect addons-775662 --format={{.State.Status}}
	I0116 04:06:59.605552 2421826 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-775662"
	I0116 04:06:59.605586 2421826 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-775662"
	I0116 04:06:59.605928 2421826 cli_runner.go:164] Run: docker container inspect addons-775662 --format={{.State.Status}}
	I0116 04:06:59.624587 2421826 addons.go:69] Setting gcp-auth=true in profile "addons-775662"
	I0116 04:06:59.624632 2421826 mustload.go:65] Loading cluster: addons-775662
	I0116 04:06:59.624868 2421826 config.go:182] Loaded profile config "addons-775662": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 04:06:59.625141 2421826 cli_runner.go:164] Run: docker container inspect addons-775662 --format={{.State.Status}}
	I0116 04:06:59.634078 2421826 addons.go:69] Setting volumesnapshots=true in profile "addons-775662"
	I0116 04:06:59.634108 2421826 addons.go:234] Setting addon volumesnapshots=true in "addons-775662"
	I0116 04:06:59.634155 2421826 host.go:66] Checking if "addons-775662" exists ...
	I0116 04:06:59.634666 2421826 cli_runner.go:164] Run: docker container inspect addons-775662 --format={{.State.Status}}
	I0116 04:06:59.648192 2421826 addons.go:69] Setting ingress=true in profile "addons-775662"
	I0116 04:06:59.648228 2421826 addons.go:234] Setting addon ingress=true in "addons-775662"
	I0116 04:06:59.648287 2421826 host.go:66] Checking if "addons-775662" exists ...
	I0116 04:06:59.648762 2421826 cli_runner.go:164] Run: docker container inspect addons-775662 --format={{.State.Status}}
	I0116 04:06:59.701298 2421826 addons.go:69] Setting ingress-dns=true in profile "addons-775662"
	I0116 04:06:59.701334 2421826 addons.go:234] Setting addon ingress-dns=true in "addons-775662"
	I0116 04:06:59.701393 2421826 host.go:66] Checking if "addons-775662" exists ...
	I0116 04:06:59.701873 2421826 cli_runner.go:164] Run: docker container inspect addons-775662 --format={{.State.Status}}
	I0116 04:06:59.820985 2421826 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I0116 04:06:59.823848 2421826 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0116 04:06:59.823914 2421826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0116 04:06:59.824019 2421826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-775662
	I0116 04:06:59.837959 2421826 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	W0116 04:06:59.839335 2421826 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "addons-775662" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E0116 04:06:59.840689 2421826 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
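The two "object has been modified" failures above are ordinary Kubernetes optimistic-concurrency conflicts: the coredns deployment changed between the read and the write, so the stale resourceVersion was rejected. A minimal conflict-tolerant rescale, sketched with plain kubectl (names and kubeconfig path taken from the log; the loop bound is an assumption):

	for i in 1 2 3 4 5; do
	  # kubectl scale re-reads the object on every attempt, so a simple
	  # retry loop is enough to ride out concurrent writers
	  sudo /var/lib/minikube/binaries/v1.28.4/kubectl \
	    --kubeconfig=/var/lib/minikube/kubeconfig \
	    -n kube-system scale deployment coredns --replicas=1 && break
	  sleep 1
	done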
	I0116 04:06:59.840874 2421826 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 04:06:59.814125 2421826 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-775662"
	I0116 04:06:59.841433 2421826 addons.go:234] Setting addon default-storageclass=true in "addons-775662"
	I0116 04:06:59.845807 2421826 host.go:66] Checking if "addons-775662" exists ...
	I0116 04:06:59.846365 2421826 cli_runner.go:164] Run: docker container inspect addons-775662 --format={{.State.Status}}
	I0116 04:06:59.858101 2421826 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.5
	I0116 04:06:59.863544 2421826 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0116 04:06:59.865200 2421826 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0116 04:06:59.864843 2421826 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I0116 04:06:59.909129 2421826 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0116 04:06:59.909168 2421826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0116 04:06:59.909258 2421826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-775662
	I0116 04:06:59.864852 2421826 out.go:177] * Verifying Kubernetes components...
	I0116 04:06:59.864862 2421826 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0116 04:06:59.864868 2421826 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0116 04:06:59.864872 2421826 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.13
	I0116 04:06:59.864913 2421826 host.go:66] Checking if "addons-775662" exists ...
	I0116 04:06:59.916981 2421826 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0116 04:06:59.917524 2421826 cli_runner.go:164] Run: docker container inspect addons-775662 --format={{.State.Status}}
	I0116 04:06:59.925653 2421826 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 04:06:59.925661 2421826 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0116 04:06:59.925665 2421826 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0116 04:06:59.925681 2421826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0116 04:06:59.928029 2421826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-775662
	I0116 04:06:59.961464 2421826 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0116 04:06:59.963356 2421826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0116 04:06:59.963423 2421826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-775662
	I0116 04:06:59.961527 2421826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 04:06:59.961551 2421826 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0116 04:06:59.964674 2421826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0116 04:06:59.966264 2421826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-775662
	I0116 04:06:59.970924 2421826 host.go:66] Checking if "addons-775662" exists ...
	I0116 04:06:59.974113 2421826 out.go:177]   - Using image docker.io/registry:2.8.3
	I0116 04:06:59.962541 2421826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35316 SSHKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/machines/addons-775662/id_rsa Username:docker}
	I0116 04:06:59.977202 2421826 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0116 04:06:59.977250 2421826 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 04:06:59.979485 2421826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 04:06:59.979578 2421826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-775662
	I0116 04:06:59.986593 2421826 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0116 04:06:59.984673 2421826 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0116 04:06:59.984703 2421826 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 04:06:59.984712 2421826 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0116 04:06:59.984721 2421826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0116 04:06:59.989890 2421826 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 04:06:59.991068 2421826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 04:06:59.991148 2421826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-775662
	I0116 04:06:59.993307 2421826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 04:06:59.993391 2421826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-775662
	I0116 04:07:00.003539 2421826 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0116 04:07:00.002080 2421826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-775662
	I0116 04:07:00.002476 2421826 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0116 04:07:00.002493 2421826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0116 04:07:00.005619 2421826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-775662
	I0116 04:07:00.039700 2421826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0116 04:07:00.039784 2421826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-775662
	I0116 04:07:00.055575 2421826 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0116 04:07:00.065034 2421826 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0116 04:07:00.067156 2421826 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0116 04:07:00.073217 2421826 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0116 04:07:00.075263 2421826 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0116 04:07:00.076897 2421826 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0116 04:07:00.082305 2421826 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0116 04:07:00.082339 2421826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0116 04:07:00.082421 2421826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-775662
	I0116 04:07:00.206121 2421826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35316 SSHKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/machines/addons-775662/id_rsa Username:docker}
	I0116 04:07:00.214353 2421826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35316 SSHKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/machines/addons-775662/id_rsa Username:docker}
	I0116 04:07:00.245673 2421826 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0116 04:07:00.248712 2421826 out.go:177]   - Using image docker.io/busybox:stable
	I0116 04:07:00.251315 2421826 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0116 04:07:00.251341 2421826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0116 04:07:00.251415 2421826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-775662
	I0116 04:07:00.264018 2421826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35316 SSHKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/machines/addons-775662/id_rsa Username:docker}
	I0116 04:07:00.320884 2421826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35316 SSHKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/machines/addons-775662/id_rsa Username:docker}
	I0116 04:07:00.332530 2421826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35316 SSHKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/machines/addons-775662/id_rsa Username:docker}
	I0116 04:07:00.336571 2421826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35316 SSHKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/machines/addons-775662/id_rsa Username:docker}
	I0116 04:07:00.357397 2421826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35316 SSHKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/machines/addons-775662/id_rsa Username:docker}
	I0116 04:07:00.369132 2421826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35316 SSHKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/machines/addons-775662/id_rsa Username:docker}
	I0116 04:07:00.387832 2421826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35316 SSHKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/machines/addons-775662/id_rsa Username:docker}
	I0116 04:07:00.399099 2421826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35316 SSHKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/machines/addons-775662/id_rsa Username:docker}
	I0116 04:07:00.412160 2421826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35316 SSHKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/machines/addons-775662/id_rsa Username:docker}
	I0116 04:07:00.413576 2421826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35316 SSHKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/machines/addons-775662/id_rsa Username:docker}
	W0116 04:07:00.414567 2421826 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0116 04:07:00.414589 2421826 retry.go:31] will retry after 345.216747ms: ssh: handshake failed: EOF
	I0116 04:07:00.625183 2421826 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0116 04:07:00.625275 2421826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0116 04:07:00.655901 2421826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0116 04:07:00.768194 2421826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0116 04:07:00.778706 2421826 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0116 04:07:00.778796 2421826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0116 04:07:00.807267 2421826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 04:07:00.864413 2421826 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0116 04:07:00.864483 2421826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0116 04:07:00.866439 2421826 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 04:07:00.866498 2421826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0116 04:07:00.879384 2421826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0116 04:07:00.887841 2421826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0116 04:07:00.892069 2421826 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0116 04:07:00.892129 2421826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0116 04:07:00.898238 2421826 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0116 04:07:00.898314 2421826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0116 04:07:00.908535 2421826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 04:07:00.989817 2421826 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0116 04:07:00.989884 2421826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0116 04:07:00.995050 2421826 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0116 04:07:00.995119 2421826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0116 04:07:01.035337 2421826 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 04:07:01.035422 2421826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 04:07:01.093406 2421826 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0116 04:07:01.093479 2421826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0116 04:07:01.098379 2421826 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0116 04:07:01.098455 2421826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0116 04:07:01.128408 2421826 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0116 04:07:01.128485 2421826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0116 04:07:01.192826 2421826 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0116 04:07:01.192896 2421826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0116 04:07:01.203090 2421826 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0116 04:07:01.203165 2421826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0116 04:07:01.221623 2421826 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 04:07:01.221697 2421826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 04:07:01.305679 2421826 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0116 04:07:01.305753 2421826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0116 04:07:01.340195 2421826 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0116 04:07:01.340264 2421826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0116 04:07:01.340528 2421826 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0116 04:07:01.340565 2421826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0116 04:07:01.351849 2421826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0116 04:07:01.400206 2421826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 04:07:01.441770 2421826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0116 04:07:01.455115 2421826 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0116 04:07:01.455190 2421826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0116 04:07:01.511920 2421826 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0116 04:07:01.511990 2421826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0116 04:07:01.535194 2421826 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0116 04:07:01.535264 2421826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0116 04:07:01.609503 2421826 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0116 04:07:01.609580 2421826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0116 04:07:01.707692 2421826 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0116 04:07:01.707768 2421826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0116 04:07:01.734046 2421826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0116 04:07:01.737578 2421826 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0116 04:07:01.737656 2421826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0116 04:07:01.845596 2421826 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0116 04:07:01.845669 2421826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0116 04:07:01.942617 2421826 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0116 04:07:01.942684 2421826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0116 04:07:01.947499 2421826 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0116 04:07:01.947573 2421826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0116 04:07:02.031833 2421826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0116 04:07:02.085289 2421826 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0116 04:07:02.085352 2421826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0116 04:07:02.087763 2421826 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.249224886s)
	I0116 04:07:02.087839 2421826 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
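The sed pipeline that just completed rewrites the coredns ConfigMap so that host.minikube.internal resolves inside the cluster; per the sed expressions in the command above, the fragment injected ahead of the forward directive is:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}

along with a log directive inserted after errors.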
	I0116 04:07:02.087879 2421826 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.123297972s)
	I0116 04:07:02.089051 2421826 node_ready.go:35] waiting up to 6m0s for node "addons-775662" to be "Ready" ...
	I0116 04:07:02.139108 2421826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0116 04:07:02.183442 2421826 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0116 04:07:02.183513 2421826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0116 04:07:02.342455 2421826 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0116 04:07:02.342526 2421826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0116 04:07:02.517292 2421826 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0116 04:07:02.517363 2421826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0116 04:07:02.569526 2421826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0116 04:07:04.303310 2421826 node_ready.go:58] node "addons-775662" has status "Ready":"False"
	I0116 04:07:06.433043 2421826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.625704801s)
	I0116 04:07:06.433146 2421826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.55367668s)
	I0116 04:07:06.433196 2421826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.545288856s)
	I0116 04:07:06.433239 2421826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.524630992s)
	I0116 04:07:06.433474 2421826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.081417984s)
	I0116 04:07:06.433658 2421826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.033384849s)
	I0116 04:07:06.433706 2421826 addons.go:470] Verifying addon metrics-server=true in "addons-775662"
	I0116 04:07:06.433753 2421826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.991909947s)
	I0116 04:07:06.433785 2421826 addons.go:470] Verifying addon registry=true in "addons-775662"
	I0116 04:07:06.435654 2421826 out.go:177] * Verifying registry addon...
	I0116 04:07:06.432963 2421826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.664669432s)
	I0116 04:07:06.434084 2421826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.699962557s)
	I0116 04:07:06.434167 2421826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.77823634s)
	I0116 04:07:06.434209 2421826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.402299374s)
	I0116 04:07:06.434281 2421826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.295103456s)
	I0116 04:07:06.437567 2421826 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-775662 service yakd-dashboard -n yakd-dashboard
	
	I0116 04:07:06.435851 2421826 addons.go:470] Verifying addon ingress=true in "addons-775662"
	W0116 04:07:06.435980 2421826 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0116 04:07:06.438552 2421826 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0116 04:07:06.442505 2421826 out.go:177] * Verifying ingress addon...
	I0116 04:07:06.440508 2421826 retry.go:31] will retry after 131.534014ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
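Both failures are the same CRD-ordering race: the VolumeSnapshotClass object is applied in the same batch that creates its CustomResourceDefinition, before the API server has begun serving the new kind. The apply --force retry below works around it; a sketch that avoids the race entirely, assuming the same manifest paths shown in the log, establishes the CRDs first:

	kubectl apply \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	# block until the new kinds are served before creating instances of them
	kubectl wait --for=condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml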
	I0116 04:07:06.445085 2421826 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0116 04:07:06.453926 2421826 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0116 04:07:06.454020 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:06.461707 2421826 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0116 04:07:06.461775 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0116 04:07:06.463764 2421826 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
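This warning is another optimistic-concurrency conflict, hit while marking local-path as the default StorageClass through a read-modify-write update. One way to sidestep the stale-resourceVersion problem, sketched here as an assumption rather than what minikube itself does, is a merge patch, which carries no resourceVersion at all:

	kubectl patch storageclass local-path -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'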
	I0116 04:07:06.576179 2421826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0116 04:07:06.595223 2421826 node_ready.go:58] node "addons-775662" has status "Ready":"False"
	I0116 04:07:06.765645 2421826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.19601404s)
	I0116 04:07:06.765715 2421826 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-775662"
	I0116 04:07:06.767991 2421826 out.go:177] * Verifying csi-hostpath-driver addon...
	I0116 04:07:06.770676 2421826 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0116 04:07:06.794629 2421826 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0116 04:07:06.794708 2421826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-775662
	I0116 04:07:06.800336 2421826 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0116 04:07:06.800418 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
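The kapi.go poll loops that fill the rest of this log re-check each label selector roughly twice a second until the pods leave Pending. For reference, a one-shot equivalent with plain kubectl, using the selectors and the 6m budget from the log, would be:

	kubectl -n kube-system wait --for=condition=ready pod \
	  -l kubernetes.io/minikube-addons=csi-hostpath-driver --timeout=6m
	kubectl -n ingress-nginx wait --for=condition=ready pod \
	  -l app.kubernetes.io/name=ingress-nginx --timeout=6m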
	I0116 04:07:06.821141 2421826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35316 SSHKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/machines/addons-775662/id_rsa Username:docker}
	I0116 04:07:06.967166 2421826 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0116 04:07:06.977290 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:06.978127 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:07.012628 2421826 addons.go:234] Setting addon gcp-auth=true in "addons-775662"
	I0116 04:07:07.012719 2421826 host.go:66] Checking if "addons-775662" exists ...
	I0116 04:07:07.013257 2421826 cli_runner.go:164] Run: docker container inspect addons-775662 --format={{.State.Status}}
	I0116 04:07:07.042900 2421826 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0116 04:07:07.042952 2421826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-775662
	I0116 04:07:07.076411 2421826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35316 SSHKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/machines/addons-775662/id_rsa Username:docker}
	I0116 04:07:07.277514 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:07.444522 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:07.450581 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:07.762843 2421826 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0116 04:07:07.759270 2421826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.183010505s)
	I0116 04:07:07.766538 2421826 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0116 04:07:07.768854 2421826 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0116 04:07:07.768883 2421826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0116 04:07:07.776094 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:07.795986 2421826 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0116 04:07:07.796014 2421826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0116 04:07:07.821287 2421826 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0116 04:07:07.821314 2421826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I0116 04:07:07.849895 2421826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0116 04:07:07.947118 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:07.955957 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:08.276521 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:08.445492 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:08.450243 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:08.776802 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:08.972523 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:08.976210 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:09.134241 2421826 node_ready.go:58] node "addons-775662" has status "Ready":"False"
	I0116 04:07:09.300937 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:09.312244 2421826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.462307919s)
	I0116 04:07:09.314932 2421826 addons.go:470] Verifying addon gcp-auth=true in "addons-775662"
	I0116 04:07:09.317004 2421826 out.go:177] * Verifying gcp-auth addon...
	I0116 04:07:09.319736 2421826 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0116 04:07:09.437787 2421826 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0116 04:07:09.437810 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:09.476692 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:09.489016 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:09.775983 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:09.823889 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:09.951794 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:09.957222 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:10.277119 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:10.325068 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:10.444728 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:10.454766 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:10.775861 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:10.823713 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:10.945138 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:10.949306 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:11.276256 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:11.324036 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:11.444983 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:11.450526 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:11.593822 2421826 node_ready.go:58] node "addons-775662" has status "Ready":"False"
	I0116 04:07:11.775990 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:11.823932 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:11.945235 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:11.951102 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:12.275739 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:12.323422 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:12.448168 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:12.454665 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:12.776250 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:12.824884 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:12.944842 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:12.949979 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:13.275586 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:13.324269 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:13.445243 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:13.449725 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:13.775769 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:13.823747 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:13.944604 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:13.950355 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:14.092832 2421826 node_ready.go:58] node "addons-775662" has status "Ready":"False"
	I0116 04:07:14.276699 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:14.323657 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:14.444896 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:14.450347 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:14.775664 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:14.824664 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:14.959911 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:14.960948 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:15.281742 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:15.324797 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:15.449313 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:15.450394 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:15.776312 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:15.823938 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:15.945334 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:15.949313 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:16.093253 2421826 node_ready.go:58] node "addons-775662" has status "Ready":"False"
	I0116 04:07:16.276425 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:16.324266 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:16.444633 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:16.449624 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:16.775594 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:16.823601 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:16.944396 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:16.949278 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:17.276065 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:17.325068 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:17.444170 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:17.449442 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:17.775617 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:17.823652 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:17.944438 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:17.949253 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:18.275325 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:18.323923 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:18.445106 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:18.448732 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:18.593082 2421826 node_ready.go:58] node "addons-775662" has status "Ready":"False"
	I0116 04:07:18.775824 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:18.823298 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:18.944258 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:18.949123 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:19.275578 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:19.323352 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:19.444776 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:19.449407 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:19.775448 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:19.824034 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:19.944534 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:19.949006 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:20.276594 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:20.324148 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:20.444409 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:20.449244 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:20.593403 2421826 node_ready.go:58] node "addons-775662" has status "Ready":"False"
	I0116 04:07:20.776169 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:20.824168 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:20.945331 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:20.949087 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:21.276017 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:21.323869 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:21.444813 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:21.449709 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:21.775728 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:21.823020 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:21.944096 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:21.949316 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:22.275965 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:22.323748 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:22.444987 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:22.449869 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:22.775115 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:22.825341 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:22.944600 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:22.959503 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:23.092604 2421826 node_ready.go:58] node "addons-775662" has status "Ready":"False"
	I0116 04:07:23.275970 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:23.323379 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:23.446609 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:23.452852 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:23.776228 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:23.824548 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:23.944265 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:23.950651 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:24.275400 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:24.324084 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:24.444932 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:24.449916 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:24.775681 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:24.823677 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:24.944292 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:24.949175 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:25.093403 2421826 node_ready.go:58] node "addons-775662" has status "Ready":"False"
	I0116 04:07:25.275747 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:25.323540 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:25.450222 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:25.451688 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:25.776084 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:25.823860 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:25.945137 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:25.948951 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:26.275513 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:26.323675 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:26.444430 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:26.449029 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:26.775681 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:26.823471 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:26.944080 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:26.949050 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:27.275854 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:27.323911 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:27.444935 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:27.449559 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:27.592934 2421826 node_ready.go:58] node "addons-775662" has status "Ready":"False"
	I0116 04:07:27.775223 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:27.823854 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:27.944825 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:27.949770 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:28.275927 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:28.323330 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:28.444942 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:28.449322 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:28.775134 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:28.823441 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:28.944405 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:28.951008 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:29.275823 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:29.323869 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:29.444896 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:29.449579 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:29.593245 2421826 node_ready.go:58] node "addons-775662" has status "Ready":"False"
	I0116 04:07:29.777243 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:29.824329 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:29.944745 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:29.949128 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:30.276663 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:30.323797 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:30.445448 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:30.449847 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:30.776080 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:30.823584 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:30.947865 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:30.951704 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:31.276085 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:31.323728 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:31.444890 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:31.452075 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:31.775792 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:31.823183 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:31.944517 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:31.949184 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:32.092713 2421826 node_ready.go:58] node "addons-775662" has status "Ready":"False"
	I0116 04:07:32.275892 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:32.323543 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:32.444254 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:32.449329 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:32.632067 2421826 node_ready.go:49] node "addons-775662" has status "Ready":"True"
	I0116 04:07:32.632100 2421826 node_ready.go:38] duration metric: took 30.542978077s waiting for node "addons-775662" to be "Ready" ...
	I0116 04:07:32.632115 2421826 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 04:07:32.656794 2421826 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-bcghj" in "kube-system" namespace to be "Ready" ...
	I0116 04:07:32.780161 2421826 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0116 04:07:32.780188 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:32.903211 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:32.967521 2421826 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0116 04:07:32.967547 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:32.982017 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:33.284143 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:33.359156 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:33.488269 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:33.523502 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:33.791052 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:33.825482 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:33.948448 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:33.961400 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:34.277327 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:34.324572 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:34.447712 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:34.454196 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:34.668168 2421826 pod_ready.go:102] pod "coredns-5dd5756b68-bcghj" in "kube-system" namespace has status "Ready":"False"
	I0116 04:07:34.823158 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:34.884561 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:34.971267 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:34.971910 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:35.163479 2421826 pod_ready.go:92] pod "coredns-5dd5756b68-bcghj" in "kube-system" namespace has status "Ready":"True"
	I0116 04:07:35.163504 2421826 pod_ready.go:81] duration metric: took 2.506670758s waiting for pod "coredns-5dd5756b68-bcghj" in "kube-system" namespace to be "Ready" ...
	I0116 04:07:35.163516 2421826 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-j2nn4" in "kube-system" namespace to be "Ready" ...
	I0116 04:07:35.174164 2421826 pod_ready.go:92] pod "coredns-5dd5756b68-j2nn4" in "kube-system" namespace has status "Ready":"True"
	I0116 04:07:35.174192 2421826 pod_ready.go:81] duration metric: took 10.667159ms waiting for pod "coredns-5dd5756b68-j2nn4" in "kube-system" namespace to be "Ready" ...
	I0116 04:07:35.174216 2421826 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-775662" in "kube-system" namespace to be "Ready" ...
	I0116 04:07:35.182571 2421826 pod_ready.go:92] pod "etcd-addons-775662" in "kube-system" namespace has status "Ready":"True"
	I0116 04:07:35.182596 2421826 pod_ready.go:81] duration metric: took 8.371438ms waiting for pod "etcd-addons-775662" in "kube-system" namespace to be "Ready" ...
	I0116 04:07:35.182610 2421826 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-775662" in "kube-system" namespace to be "Ready" ...
	I0116 04:07:35.196291 2421826 pod_ready.go:92] pod "kube-apiserver-addons-775662" in "kube-system" namespace has status "Ready":"True"
	I0116 04:07:35.196318 2421826 pod_ready.go:81] duration metric: took 13.699591ms waiting for pod "kube-apiserver-addons-775662" in "kube-system" namespace to be "Ready" ...
	I0116 04:07:35.196331 2421826 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-775662" in "kube-system" namespace to be "Ready" ...
	I0116 04:07:35.206925 2421826 pod_ready.go:92] pod "kube-controller-manager-addons-775662" in "kube-system" namespace has status "Ready":"True"
	I0116 04:07:35.206952 2421826 pod_ready.go:81] duration metric: took 10.612268ms waiting for pod "kube-controller-manager-addons-775662" in "kube-system" namespace to be "Ready" ...
	I0116 04:07:35.206967 2421826 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rkmnb" in "kube-system" namespace to be "Ready" ...
	I0116 04:07:35.276989 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:35.323867 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:35.461449 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:35.465602 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:35.561676 2421826 pod_ready.go:92] pod "kube-proxy-rkmnb" in "kube-system" namespace has status "Ready":"True"
	I0116 04:07:35.561754 2421826 pod_ready.go:81] duration metric: took 354.777418ms waiting for pod "kube-proxy-rkmnb" in "kube-system" namespace to be "Ready" ...
	I0116 04:07:35.561782 2421826 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-775662" in "kube-system" namespace to be "Ready" ...
	I0116 04:07:35.781031 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:35.824120 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:35.946006 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:35.950863 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:35.961647 2421826 pod_ready.go:92] pod "kube-scheduler-addons-775662" in "kube-system" namespace has status "Ready":"True"
	I0116 04:07:35.961725 2421826 pod_ready.go:81] duration metric: took 399.898267ms waiting for pod "kube-scheduler-addons-775662" in "kube-system" namespace to be "Ready" ...
	I0116 04:07:35.961751 2421826 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-dtqwj" in "kube-system" namespace to be "Ready" ...
	I0116 04:07:36.277984 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:36.327612 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:36.445267 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:36.450160 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:36.778723 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:36.828996 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:36.945425 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:36.954466 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:37.276959 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:37.323290 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:37.445659 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:37.449834 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:37.776533 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:37.825433 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:37.947499 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:37.950734 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:37.977375 2421826 pod_ready.go:102] pod "metrics-server-7c66d45ddc-dtqwj" in "kube-system" namespace has status "Ready":"False"
	I0116 04:07:38.280976 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:38.325047 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:38.459716 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:38.464694 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:38.777535 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:38.824273 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:38.945998 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:38.950576 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:39.276818 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:39.323794 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:39.446134 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:39.450494 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:39.778565 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:39.824508 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:39.948447 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:39.958565 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:39.978724 2421826 pod_ready.go:102] pod "metrics-server-7c66d45ddc-dtqwj" in "kube-system" namespace has status "Ready":"False"
	I0116 04:07:40.290864 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:40.348307 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:40.460687 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:40.488489 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:40.777508 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:40.823632 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:40.945723 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:40.949257 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:41.276740 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:41.323924 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:41.445343 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:41.449418 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:41.778330 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:41.823977 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:41.945165 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:41.949393 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:42.277438 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:42.325044 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:42.445165 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:42.449482 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:42.469505 2421826 pod_ready.go:102] pod "metrics-server-7c66d45ddc-dtqwj" in "kube-system" namespace has status "Ready":"False"
	I0116 04:07:42.777134 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:42.824384 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:42.955274 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:42.957026 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:43.278904 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:43.324667 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:43.449209 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:43.458691 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:43.779551 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:43.825487 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:43.947477 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:43.971305 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:44.277246 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:44.324330 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:44.447523 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:44.452322 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:44.471909 2421826 pod_ready.go:102] pod "metrics-server-7c66d45ddc-dtqwj" in "kube-system" namespace has status "Ready":"False"
	I0116 04:07:44.778282 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:44.827973 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:44.946053 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:44.951672 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:45.285439 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:45.325823 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:45.448666 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:45.454354 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:45.777078 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:45.824317 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:45.945935 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:45.953797 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:46.276904 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:46.323811 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:46.447564 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:46.450362 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:46.776983 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:46.823550 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:46.944910 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:46.949903 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:46.968041 2421826 pod_ready.go:102] pod "metrics-server-7c66d45ddc-dtqwj" in "kube-system" namespace has status "Ready":"False"
	I0116 04:07:47.277271 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:47.324657 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:47.447219 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:47.451583 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:47.778331 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:47.825804 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:47.946352 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:47.951984 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:48.288600 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:48.324945 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:48.446287 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:48.450737 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:48.779459 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:48.824131 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:48.944692 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:48.951955 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:48.972003 2421826 pod_ready.go:102] pod "metrics-server-7c66d45ddc-dtqwj" in "kube-system" namespace has status "Ready":"False"
	I0116 04:07:49.277580 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:49.325306 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:49.449759 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:49.452553 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:49.776471 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:49.824336 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:49.946544 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:49.949983 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:50.276553 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:50.324334 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:50.451310 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:50.457151 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:50.778570 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:50.824587 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:50.961786 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:50.978981 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:50.984891 2421826 pod_ready.go:102] pod "metrics-server-7c66d45ddc-dtqwj" in "kube-system" namespace has status "Ready":"False"
	I0116 04:07:51.278840 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:51.323806 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:51.449820 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:51.453400 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:51.777758 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:51.825455 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:51.952563 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:51.958010 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:52.277090 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:52.327063 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:52.444926 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:52.449727 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:52.777739 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:52.825288 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:52.946841 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:52.955446 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:53.277310 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:53.324156 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:53.445296 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:53.450322 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:53.474209 2421826 pod_ready.go:102] pod "metrics-server-7c66d45ddc-dtqwj" in "kube-system" namespace has status "Ready":"False"
	I0116 04:07:53.781773 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:53.823540 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:53.945455 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:53.949289 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:54.276478 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:54.324323 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:54.445301 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:54.449470 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:54.780726 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:54.824779 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:54.946283 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:54.950565 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:55.278341 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:55.324306 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:55.450609 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:55.457162 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:55.779245 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:55.823925 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:55.947496 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:55.953743 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:55.968920 2421826 pod_ready.go:102] pod "metrics-server-7c66d45ddc-dtqwj" in "kube-system" namespace has status "Ready":"False"
	I0116 04:07:56.278104 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:56.324183 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:56.445918 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:56.450413 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:56.779920 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:56.823946 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:56.946685 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:56.959519 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:57.284214 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:57.326434 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:57.446163 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:57.459683 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:57.779156 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:57.825508 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:57.957790 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:57.957905 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:57.972144 2421826 pod_ready.go:102] pod "metrics-server-7c66d45ddc-dtqwj" in "kube-system" namespace has status "Ready":"False"
	I0116 04:07:58.277459 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:58.324585 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:58.461642 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:58.487291 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:58.777407 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:58.824159 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:58.945631 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:58.949617 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:59.277297 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:59.323574 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:07:59.451982 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:07:59.455629 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:07:59.784187 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:07:59.826006 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:00.003087 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:00.128449 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:08:00.168693 2421826 pod_ready.go:102] pod "metrics-server-7c66d45ddc-dtqwj" in "kube-system" namespace has status "Ready":"False"
	I0116 04:08:00.321919 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:08:00.329700 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:00.448064 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:08:00.451366 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:00.785728 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:08:00.826797 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:00.945246 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:08:00.949856 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:01.277433 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:08:01.324343 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:01.445688 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:08:01.449770 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:01.776949 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:08:01.824384 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:01.945329 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:08:01.949538 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:02.291613 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:08:02.324277 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:02.448602 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:08:02.451485 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:02.473839 2421826 pod_ready.go:102] pod "metrics-server-7c66d45ddc-dtqwj" in "kube-system" namespace has status "Ready":"False"
	I0116 04:08:02.778685 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:08:02.824499 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:02.945128 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:08:02.951358 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:03.310464 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:08:03.328163 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:03.446192 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:08:03.449143 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:03.469175 2421826 pod_ready.go:92] pod "metrics-server-7c66d45ddc-dtqwj" in "kube-system" namespace has status "Ready":"True"
	I0116 04:08:03.469204 2421826 pod_ready.go:81] duration metric: took 27.507431359s waiting for pod "metrics-server-7c66d45ddc-dtqwj" in "kube-system" namespace to be "Ready" ...
	I0116 04:08:03.469217 2421826 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-gb8vg" in "kube-system" namespace to be "Ready" ...
	I0116 04:08:03.474891 2421826 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-gb8vg" in "kube-system" namespace has status "Ready":"True"
	I0116 04:08:03.474918 2421826 pod_ready.go:81] duration metric: took 5.692376ms waiting for pod "nvidia-device-plugin-daemonset-gb8vg" in "kube-system" namespace to be "Ready" ...
	I0116 04:08:03.474958 2421826 pod_ready.go:38] duration metric: took 30.842826563s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 04:08:03.474980 2421826 api_server.go:52] waiting for apiserver process to appear ...
	I0116 04:08:03.475011 2421826 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 04:08:03.475099 2421826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 04:08:03.544377 2421826 cri.go:89] found id: "1c24557c27c423ad2adca3ff83d2c7860c713d0f206d63725b22eb79e33e4e52"
	I0116 04:08:03.544443 2421826 cri.go:89] found id: ""
	I0116 04:08:03.544465 2421826 logs.go:284] 1 containers: [1c24557c27c423ad2adca3ff83d2c7860c713d0f206d63725b22eb79e33e4e52]
	I0116 04:08:03.544544 2421826 ssh_runner.go:195] Run: which crictl
	I0116 04:08:03.549104 2421826 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 04:08:03.549203 2421826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 04:08:03.602636 2421826 cri.go:89] found id: "36cc42d34657f7dcd2d69e34f5d7f8df00436cc8a4e4a7f2a34bc719382488be"
	I0116 04:08:03.602658 2421826 cri.go:89] found id: ""
	I0116 04:08:03.602666 2421826 logs.go:284] 1 containers: [36cc42d34657f7dcd2d69e34f5d7f8df00436cc8a4e4a7f2a34bc719382488be]
	I0116 04:08:03.602741 2421826 ssh_runner.go:195] Run: which crictl
	I0116 04:08:03.607315 2421826 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 04:08:03.607386 2421826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 04:08:03.663181 2421826 cri.go:89] found id: "eb7e7a284f2385045329fe859b5b2e40a28b66350e31afca6f859bb745b15ba5"
	I0116 04:08:03.663204 2421826 cri.go:89] found id: "5b614e9c029efc5e0287c89b5e9a80222942369965733bac610f2073da759a60"
	I0116 04:08:03.663210 2421826 cri.go:89] found id: ""
	I0116 04:08:03.663217 2421826 logs.go:284] 2 containers: [eb7e7a284f2385045329fe859b5b2e40a28b66350e31afca6f859bb745b15ba5 5b614e9c029efc5e0287c89b5e9a80222942369965733bac610f2073da759a60]
	I0116 04:08:03.663276 2421826 ssh_runner.go:195] Run: which crictl
	I0116 04:08:03.668063 2421826 ssh_runner.go:195] Run: which crictl
	I0116 04:08:03.672710 2421826 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 04:08:03.672833 2421826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 04:08:03.724687 2421826 cri.go:89] found id: "5c3fd4646f1bea194b19dc2c00b7584d0a51e7d87dd37dc711097c1e0a3be0b5"
	I0116 04:08:03.724775 2421826 cri.go:89] found id: ""
	I0116 04:08:03.724798 2421826 logs.go:284] 1 containers: [5c3fd4646f1bea194b19dc2c00b7584d0a51e7d87dd37dc711097c1e0a3be0b5]
	I0116 04:08:03.724877 2421826 ssh_runner.go:195] Run: which crictl
	I0116 04:08:03.730151 2421826 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 04:08:03.730265 2421826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 04:08:03.778380 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:08:03.784008 2421826 cri.go:89] found id: "31a7c84e2beb2e15f37884bb50aaadad8bccc0bed1da9069580e535a6f132f2e"
	I0116 04:08:03.784068 2421826 cri.go:89] found id: ""
	I0116 04:08:03.784091 2421826 logs.go:284] 1 containers: [31a7c84e2beb2e15f37884bb50aaadad8bccc0bed1da9069580e535a6f132f2e]
	I0116 04:08:03.784179 2421826 ssh_runner.go:195] Run: which crictl
	I0116 04:08:03.790493 2421826 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 04:08:03.790609 2421826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 04:08:03.824598 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:03.835516 2421826 cri.go:89] found id: "3104f9d66b0d9e714b770d97a86007e470c3c218775525423c46cf71c8d3ebc1"
	I0116 04:08:03.835595 2421826 cri.go:89] found id: ""
	I0116 04:08:03.835610 2421826 logs.go:284] 1 containers: [3104f9d66b0d9e714b770d97a86007e470c3c218775525423c46cf71c8d3ebc1]
	I0116 04:08:03.835692 2421826 ssh_runner.go:195] Run: which crictl
	I0116 04:08:03.840449 2421826 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 04:08:03.840519 2421826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 04:08:03.887899 2421826 cri.go:89] found id: "1e42c37eeba0422266b5ea293a18c186a2cb272fe6dcb68497cb18b9ac9c8493"
	I0116 04:08:03.887923 2421826 cri.go:89] found id: ""
	I0116 04:08:03.887931 2421826 logs.go:284] 1 containers: [1e42c37eeba0422266b5ea293a18c186a2cb272fe6dcb68497cb18b9ac9c8493]
	I0116 04:08:03.887993 2421826 ssh_runner.go:195] Run: which crictl
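
Each "listing CRI containers" step above follows one pattern: run "sudo crictl ps -a --quiet --name=<component>" over SSH and treat every non-empty stdout line as a container ID (the "found id:" lines). A minimal Go sketch of that flow, assuming a caller-supplied command runner; runCmd and listContainerIDs are illustrative names, not minikube's actual API:

    package diag

    import (
        "fmt"
        "strings"
    )

    // listContainerIDs mirrors the listing steps above: run crictl with
    // --quiet and collect one container ID per non-empty output line.
    func listContainerIDs(runCmd func(string) (string, error), component string) ([]string, error) {
        out, err := runCmd("sudo crictl ps -a --quiet --name=" + component)
        if err != nil {
            return nil, fmt.Errorf("crictl ps for %s: %w", component, err)
        }
        var ids []string
        for _, line := range strings.Split(out, "\n") {
            if id := strings.TrimSpace(line); id != "" {
                ids = append(ids, id) // e.g. "1c24557c27c4..." for kube-apiserver
            }
        }
        return ids, nil
    }
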
	I0116 04:08:03.893076 2421826 logs.go:123] Gathering logs for coredns [eb7e7a284f2385045329fe859b5b2e40a28b66350e31afca6f859bb745b15ba5] ...
	I0116 04:08:03.893100 2421826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb7e7a284f2385045329fe859b5b2e40a28b66350e31afca6f859bb745b15ba5"
	I0116 04:08:03.947175 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:08:03.952019 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:03.955235 2421826 logs.go:123] Gathering logs for kindnet [1e42c37eeba0422266b5ea293a18c186a2cb272fe6dcb68497cb18b9ac9c8493] ...
	I0116 04:08:03.955266 2421826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e42c37eeba0422266b5ea293a18c186a2cb272fe6dcb68497cb18b9ac9c8493"
	I0116 04:08:04.014495 2421826 logs.go:123] Gathering logs for CRI-O ...
	I0116 04:08:04.014523 2421826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 04:08:04.127326 2421826 logs.go:123] Gathering logs for describe nodes ...
	I0116 04:08:04.127402 2421826 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 04:08:04.336334 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:08:04.348030 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:04.453847 2421826 logs.go:123] Gathering logs for kube-apiserver [1c24557c27c423ad2adca3ff83d2c7860c713d0f206d63725b22eb79e33e4e52] ...
	I0116 04:08:04.453882 2421826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c24557c27c423ad2adca3ff83d2c7860c713d0f206d63725b22eb79e33e4e52"
	I0116 04:08:04.476716 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:04.483101 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:08:04.675467 2421826 logs.go:123] Gathering logs for etcd [36cc42d34657f7dcd2d69e34f5d7f8df00436cc8a4e4a7f2a34bc719382488be] ...
	I0116 04:08:04.675544 2421826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36cc42d34657f7dcd2d69e34f5d7f8df00436cc8a4e4a7f2a34bc719382488be"
	I0116 04:08:04.778217 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:08:04.823966 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:04.950367 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:08:04.952985 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:04.968849 2421826 logs.go:123] Gathering logs for kubelet ...
	I0116 04:08:04.968938 2421826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0116 04:08:05.092325 2421826 logs.go:138] Found kubelet problem: Jan 16 04:07:05 addons-775662 kubelet[1352]: W0116 04:07:05.970100    1352 reflector.go:535] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-775662" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-775662' and this object
	W0116 04:08:05.092620 2421826 logs.go:138] Found kubelet problem: Jan 16 04:07:05 addons-775662 kubelet[1352]: E0116 04:07:05.970170    1352 reflector.go:147] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-775662" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-775662' and this object
	W0116 04:08:05.103242 2421826 logs.go:138] Found kubelet problem: Jan 16 04:07:32 addons-775662 kubelet[1352]: W0116 04:07:32.595964    1352 reflector.go:535] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-775662" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-775662' and this object
	W0116 04:08:05.103531 2421826 logs.go:138] Found kubelet problem: Jan 16 04:07:32 addons-775662 kubelet[1352]: E0116 04:07:32.596018    1352 reflector.go:147] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-775662" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-775662' and this object
	W0116 04:08:05.103738 2421826 logs.go:138] Found kubelet problem: Jan 16 04:07:32 addons-775662 kubelet[1352]: W0116 04:07:32.596082    1352 reflector.go:535] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-775662" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-775662' and this object
	W0116 04:08:05.103975 2421826 logs.go:138] Found kubelet problem: Jan 16 04:07:32 addons-775662 kubelet[1352]: E0116 04:07:32.596104    1352 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-775662" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-775662' and this object
	W0116 04:08:05.104168 2421826 logs.go:138] Found kubelet problem: Jan 16 04:07:32 addons-775662 kubelet[1352]: W0116 04:07:32.596197    1352 reflector.go:535] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-775662" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-775662' and this object
	W0116 04:08:05.104391 2421826 logs.go:138] Found kubelet problem: Jan 16 04:07:32 addons-775662 kubelet[1352]: E0116 04:07:32.596212    1352 reflector.go:147] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-775662" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-775662' and this object
	W0116 04:08:05.106408 2421826 logs.go:138] Found kubelet problem: Jan 16 04:07:32 addons-775662 kubelet[1352]: W0116 04:07:32.628300    1352 reflector.go:535] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-775662" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-775662' and this object
	W0116 04:08:05.106642 2421826 logs.go:138] Found kubelet problem: Jan 16 04:07:32 addons-775662 kubelet[1352]: E0116 04:07:32.628343    1352 reflector.go:147] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-775662" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-775662' and this object
	W0116 04:08:05.106862 2421826 logs.go:138] Found kubelet problem: Jan 16 04:07:32 addons-775662 kubelet[1352]: W0116 04:07:32.628642    1352 reflector.go:535] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-775662" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-775662' and this object
	W0116 04:08:05.107082 2421826 logs.go:138] Found kubelet problem: Jan 16 04:07:32 addons-775662 kubelet[1352]: E0116 04:07:32.628670    1352 reflector.go:147] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-775662" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-775662' and this object
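
The "Found kubelet problem" warnings above come from scanning the journalctl dump line by line for known failure signatures. A hedged sketch of that kind of scan; the signature list here is illustrative only, not the real table in logs.go:

    package diag

    import "strings"

    // scanKubeletProblems flags journal lines that match known failure
    // signatures, like the reflector RBAC denials reported above.
    func scanKubeletProblems(journal string) []string {
        signatures := []string{
            "is forbidden: User",                 // RBAC denial, as in the lines above
            "no relationship found between node", // node/object authorization gap
        }
        var problems []string
        for _, line := range strings.Split(journal, "\n") {
            for _, sig := range signatures {
                if strings.Contains(line, sig) {
                    problems = append(problems, line)
                    break
                }
            }
        }
        return problems
    }
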
	I0116 04:08:05.135269 2421826 logs.go:123] Gathering logs for coredns [5b614e9c029efc5e0287c89b5e9a80222942369965733bac610f2073da759a60] ...
	I0116 04:08:05.135345 2421826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b614e9c029efc5e0287c89b5e9a80222942369965733bac610f2073da759a60"
	I0116 04:08:05.209259 2421826 logs.go:123] Gathering logs for kube-scheduler [5c3fd4646f1bea194b19dc2c00b7584d0a51e7d87dd37dc711097c1e0a3be0b5] ...
	I0116 04:08:05.209325 2421826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c3fd4646f1bea194b19dc2c00b7584d0a51e7d87dd37dc711097c1e0a3be0b5"
	I0116 04:08:05.281676 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:08:05.303004 2421826 logs.go:123] Gathering logs for kube-proxy [31a7c84e2beb2e15f37884bb50aaadad8bccc0bed1da9069580e535a6f132f2e] ...
	I0116 04:08:05.303045 2421826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31a7c84e2beb2e15f37884bb50aaadad8bccc0bed1da9069580e535a6f132f2e"
	I0116 04:08:05.324619 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:05.355805 2421826 logs.go:123] Gathering logs for kube-controller-manager [3104f9d66b0d9e714b770d97a86007e470c3c218775525423c46cf71c8d3ebc1] ...
	I0116 04:08:05.355840 2421826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3104f9d66b0d9e714b770d97a86007e470c3c218775525423c46cf71c8d3ebc1"
	I0116 04:08:05.454373 2421826 logs.go:123] Gathering logs for container status ...
	I0116 04:08:05.454413 2421826 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 04:08:05.460135 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:08:05.461324 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:05.576296 2421826 logs.go:123] Gathering logs for dmesg ...
	I0116 04:08:05.576331 2421826 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 04:08:05.601889 2421826 out.go:309] Setting ErrFile to fd 2...
	I0116 04:08:05.601925 2421826 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0116 04:08:05.601982 2421826 out.go:239] X Problems detected in kubelet:
	W0116 04:08:05.602001 2421826 out.go:239]   Jan 16 04:07:32 addons-775662 kubelet[1352]: E0116 04:07:32.596212    1352 reflector.go:147] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-775662" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-775662' and this object
	W0116 04:08:05.602014 2421826 out.go:239]   Jan 16 04:07:32 addons-775662 kubelet[1352]: W0116 04:07:32.628300    1352 reflector.go:535] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-775662" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-775662' and this object
	W0116 04:08:05.602023 2421826 out.go:239]   Jan 16 04:07:32 addons-775662 kubelet[1352]: E0116 04:07:32.628343    1352 reflector.go:147] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-775662" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-775662' and this object
	W0116 04:08:05.602030 2421826 out.go:239]   Jan 16 04:07:32 addons-775662 kubelet[1352]: W0116 04:07:32.628642    1352 reflector.go:535] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-775662" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-775662' and this object
	W0116 04:08:05.602039 2421826 out.go:239]   Jan 16 04:07:32 addons-775662 kubelet[1352]: E0116 04:07:32.628670    1352 reflector.go:147] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-775662" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-775662' and this object
	I0116 04:08:05.602046 2421826 out.go:309] Setting ErrFile to fd 2...
	I0116 04:08:05.602054 2421826 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 04:08:05.779737 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:08:05.825184 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:05.951928 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:08:05.957354 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:06.279750 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:08:06.324469 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:06.446163 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:08:06.456223 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:06.780330 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:08:06.824303 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:06.945658 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:08:06.953096 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:07.281912 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:08:07.324278 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:07.448999 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:08:07.454252 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:07.777853 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:08:07.828511 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:07.948348 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 04:08:07.951854 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:08.276740 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:08:08.331649 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:08.445976 2421826 kapi.go:107] duration metric: took 1m2.007420838s to wait for kubernetes.io/minikube-addons=registry ...
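
The kapi.go lines above poll pods by label selector roughly every 500ms until they leave Pending, then log the total wait as a duration metric (as the registry addon just did). A minimal client-go sketch of such a loop, assuming an existing clientset; waitForPods is an illustrative name:

    package diag

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitForPods polls pods matching selector until all report Running,
    // returning the elapsed wait (the "duration metric: took ..." value).
    func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string) (time.Duration, error) {
        start := time.Now()
        for {
            pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
            if err == nil && len(pods.Items) > 0 {
                running := true
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        running = false // still Pending, as logged above
                        break
                    }
                }
                if running {
                    return time.Since(start), nil
                }
            }
            select {
            case <-ctx.Done():
                return time.Since(start), ctx.Err()
            case <-time.After(500 * time.Millisecond): // matches the ~500ms tick above
            }
        }
    }
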
	I0116 04:08:08.449521 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:08.776435 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:08:08.824081 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:08.951186 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:09.277295 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:08:09.329634 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:09.450774 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:09.777792 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:08:09.825347 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:09.950006 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:10.276550 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:08:10.324148 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:10.452662 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:10.777049 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:08:10.823286 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:10.949765 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:11.277464 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:08:11.330368 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:11.449880 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:11.778316 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:08:11.824302 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:11.950040 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:12.276922 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:08:12.324630 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:12.458356 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:12.776496 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:08:12.824220 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:12.950638 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:13.280811 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:08:13.324098 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:13.459263 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:13.778448 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:08:13.824329 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:13.950520 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:14.277366 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:08:14.324145 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:14.452587 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:14.779897 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:08:14.824588 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:14.951413 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:15.278905 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:08:15.323764 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:15.455030 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:15.603503 2421826 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 04:08:15.624492 2421826 api_server.go:72] duration metric: took 1m15.783472313s to wait for apiserver process to appear ...
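
The apiserver process wait above resolves once "sudo pgrep -xnf kube-apiserver.*minikube.*" exits zero. A small sketch of that style of wait; runCmd stands in for the ssh_runner calls shown in the log and waitForProcess is a hypothetical helper:

    package diag

    import (
        "fmt"
        "time"
    )

    // waitForProcess polls pgrep until the target process appears; pgrep
    // exits non-zero while no process matches the pattern.
    func waitForProcess(runCmd func(string) (string, error), pattern string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := runCmd("sudo pgrep -xnf " + pattern); err == nil {
                return nil
            }
            time.Sleep(time.Second)
        }
        return fmt.Errorf("process %q did not appear within %s", pattern, timeout)
    }
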
	I0116 04:08:15.624517 2421826 api_server.go:88] waiting for apiserver healthz status ...
	I0116 04:08:15.624551 2421826 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 04:08:15.624610 2421826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 04:08:15.676972 2421826 cri.go:89] found id: "1c24557c27c423ad2adca3ff83d2c7860c713d0f206d63725b22eb79e33e4e52"
	I0116 04:08:15.676993 2421826 cri.go:89] found id: ""
	I0116 04:08:15.677001 2421826 logs.go:284] 1 containers: [1c24557c27c423ad2adca3ff83d2c7860c713d0f206d63725b22eb79e33e4e52]
	I0116 04:08:15.677058 2421826 ssh_runner.go:195] Run: which crictl
	I0116 04:08:15.682634 2421826 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 04:08:15.682702 2421826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 04:08:15.739019 2421826 cri.go:89] found id: "36cc42d34657f7dcd2d69e34f5d7f8df00436cc8a4e4a7f2a34bc719382488be"
	I0116 04:08:15.739042 2421826 cri.go:89] found id: ""
	I0116 04:08:15.739050 2421826 logs.go:284] 1 containers: [36cc42d34657f7dcd2d69e34f5d7f8df00436cc8a4e4a7f2a34bc719382488be]
	I0116 04:08:15.739103 2421826 ssh_runner.go:195] Run: which crictl
	I0116 04:08:15.746244 2421826 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 04:08:15.746311 2421826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 04:08:15.777700 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:08:15.804966 2421826 cri.go:89] found id: "eb7e7a284f2385045329fe859b5b2e40a28b66350e31afca6f859bb745b15ba5"
	I0116 04:08:15.804990 2421826 cri.go:89] found id: "5b614e9c029efc5e0287c89b5e9a80222942369965733bac610f2073da759a60"
	I0116 04:08:15.804996 2421826 cri.go:89] found id: ""
	I0116 04:08:15.805004 2421826 logs.go:284] 2 containers: [eb7e7a284f2385045329fe859b5b2e40a28b66350e31afca6f859bb745b15ba5 5b614e9c029efc5e0287c89b5e9a80222942369965733bac610f2073da759a60]
	I0116 04:08:15.805057 2421826 ssh_runner.go:195] Run: which crictl
	I0116 04:08:15.809705 2421826 ssh_runner.go:195] Run: which crictl
	I0116 04:08:15.815557 2421826 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 04:08:15.815626 2421826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 04:08:15.824061 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:15.872660 2421826 cri.go:89] found id: "5c3fd4646f1bea194b19dc2c00b7584d0a51e7d87dd37dc711097c1e0a3be0b5"
	I0116 04:08:15.872729 2421826 cri.go:89] found id: ""
	I0116 04:08:15.872788 2421826 logs.go:284] 1 containers: [5c3fd4646f1bea194b19dc2c00b7584d0a51e7d87dd37dc711097c1e0a3be0b5]
	I0116 04:08:15.872883 2421826 ssh_runner.go:195] Run: which crictl
	I0116 04:08:15.878322 2421826 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 04:08:15.878463 2421826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 04:08:15.939537 2421826 cri.go:89] found id: "31a7c84e2beb2e15f37884bb50aaadad8bccc0bed1da9069580e535a6f132f2e"
	I0116 04:08:15.939613 2421826 cri.go:89] found id: ""
	I0116 04:08:15.939635 2421826 logs.go:284] 1 containers: [31a7c84e2beb2e15f37884bb50aaadad8bccc0bed1da9069580e535a6f132f2e]
	I0116 04:08:15.939724 2421826 ssh_runner.go:195] Run: which crictl
	I0116 04:08:15.944899 2421826 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 04:08:15.945028 2421826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 04:08:15.952201 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:16.009686 2421826 cri.go:89] found id: "3104f9d66b0d9e714b770d97a86007e470c3c218775525423c46cf71c8d3ebc1"
	I0116 04:08:16.009713 2421826 cri.go:89] found id: ""
	I0116 04:08:16.009721 2421826 logs.go:284] 1 containers: [3104f9d66b0d9e714b770d97a86007e470c3c218775525423c46cf71c8d3ebc1]
	I0116 04:08:16.009781 2421826 ssh_runner.go:195] Run: which crictl
	I0116 04:08:16.016498 2421826 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 04:08:16.016572 2421826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 04:08:16.080650 2421826 cri.go:89] found id: "1e42c37eeba0422266b5ea293a18c186a2cb272fe6dcb68497cb18b9ac9c8493"
	I0116 04:08:16.080675 2421826 cri.go:89] found id: ""
	I0116 04:08:16.080684 2421826 logs.go:284] 1 containers: [1e42c37eeba0422266b5ea293a18c186a2cb272fe6dcb68497cb18b9ac9c8493]
	I0116 04:08:16.080737 2421826 ssh_runner.go:195] Run: which crictl
	I0116 04:08:16.088648 2421826 logs.go:123] Gathering logs for describe nodes ...
	I0116 04:08:16.088676 2421826 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 04:08:16.277510 2421826 logs.go:123] Gathering logs for coredns [eb7e7a284f2385045329fe859b5b2e40a28b66350e31afca6f859bb745b15ba5] ...
	I0116 04:08:16.277541 2421826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb7e7a284f2385045329fe859b5b2e40a28b66350e31afca6f859bb745b15ba5"
	I0116 04:08:16.281313 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:08:16.323508 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:16.350176 2421826 logs.go:123] Gathering logs for kube-scheduler [5c3fd4646f1bea194b19dc2c00b7584d0a51e7d87dd37dc711097c1e0a3be0b5] ...
	I0116 04:08:16.350205 2421826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c3fd4646f1bea194b19dc2c00b7584d0a51e7d87dd37dc711097c1e0a3be0b5"
	I0116 04:08:16.419851 2421826 logs.go:123] Gathering logs for kube-proxy [31a7c84e2beb2e15f37884bb50aaadad8bccc0bed1da9069580e535a6f132f2e] ...
	I0116 04:08:16.419894 2421826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31a7c84e2beb2e15f37884bb50aaadad8bccc0bed1da9069580e535a6f132f2e"
	I0116 04:08:16.450718 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:16.475557 2421826 logs.go:123] Gathering logs for CRI-O ...
	I0116 04:08:16.475636 2421826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 04:08:16.572090 2421826 logs.go:123] Gathering logs for dmesg ...
	I0116 04:08:16.572126 2421826 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 04:08:16.605348 2421826 logs.go:123] Gathering logs for container status ...
	I0116 04:08:16.605379 2421826 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 04:08:16.687828 2421826 logs.go:123] Gathering logs for kubelet ...
	I0116 04:08:16.687858 2421826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0116 04:08:16.771052 2421826 logs.go:138] Found kubelet problem: Jan 16 04:07:05 addons-775662 kubelet[1352]: W0116 04:07:05.970100    1352 reflector.go:535] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-775662" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-775662' and this object
	W0116 04:08:16.771322 2421826 logs.go:138] Found kubelet problem: Jan 16 04:07:05 addons-775662 kubelet[1352]: E0116 04:07:05.970170    1352 reflector.go:147] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-775662" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-775662' and this object
	I0116 04:08:16.778196 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0116 04:08:16.779314 2421826 logs.go:138] Found kubelet problem: Jan 16 04:07:32 addons-775662 kubelet[1352]: W0116 04:07:32.595964    1352 reflector.go:535] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-775662" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-775662' and this object
	W0116 04:08:16.779559 2421826 logs.go:138] Found kubelet problem: Jan 16 04:07:32 addons-775662 kubelet[1352]: E0116 04:07:32.596018    1352 reflector.go:147] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-775662" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-775662' and this object
	W0116 04:08:16.779772 2421826 logs.go:138] Found kubelet problem: Jan 16 04:07:32 addons-775662 kubelet[1352]: W0116 04:07:32.596082    1352 reflector.go:535] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-775662" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-775662' and this object
	W0116 04:08:16.779994 2421826 logs.go:138] Found kubelet problem: Jan 16 04:07:32 addons-775662 kubelet[1352]: E0116 04:07:32.596104    1352 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-775662" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-775662' and this object
	W0116 04:08:16.780201 2421826 logs.go:138] Found kubelet problem: Jan 16 04:07:32 addons-775662 kubelet[1352]: W0116 04:07:32.596197    1352 reflector.go:535] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-775662" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-775662' and this object
	W0116 04:08:16.780415 2421826 logs.go:138] Found kubelet problem: Jan 16 04:07:32 addons-775662 kubelet[1352]: E0116 04:07:32.596212    1352 reflector.go:147] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-775662" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-775662' and this object
	W0116 04:08:16.782400 2421826 logs.go:138] Found kubelet problem: Jan 16 04:07:32 addons-775662 kubelet[1352]: W0116 04:07:32.628300    1352 reflector.go:535] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-775662" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-775662' and this object
	W0116 04:08:16.782628 2421826 logs.go:138] Found kubelet problem: Jan 16 04:07:32 addons-775662 kubelet[1352]: E0116 04:07:32.628343    1352 reflector.go:147] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-775662" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-775662' and this object
	W0116 04:08:16.782832 2421826 logs.go:138] Found kubelet problem: Jan 16 04:07:32 addons-775662 kubelet[1352]: W0116 04:07:32.628642    1352 reflector.go:535] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-775662" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-775662' and this object
	W0116 04:08:16.783056 2421826 logs.go:138] Found kubelet problem: Jan 16 04:07:32 addons-775662 kubelet[1352]: E0116 04:07:32.628670    1352 reflector.go:147] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-775662" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-775662' and this object
	I0116 04:08:16.817581 2421826 logs.go:123] Gathering logs for etcd [36cc42d34657f7dcd2d69e34f5d7f8df00436cc8a4e4a7f2a34bc719382488be] ...
	I0116 04:08:16.817659 2421826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36cc42d34657f7dcd2d69e34f5d7f8df00436cc8a4e4a7f2a34bc719382488be"
	I0116 04:08:16.823766 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:16.881214 2421826 logs.go:123] Gathering logs for kube-controller-manager [3104f9d66b0d9e714b770d97a86007e470c3c218775525423c46cf71c8d3ebc1] ...
	I0116 04:08:16.881251 2421826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3104f9d66b0d9e714b770d97a86007e470c3c218775525423c46cf71c8d3ebc1"
	I0116 04:08:16.951844 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:16.957544 2421826 logs.go:123] Gathering logs for kube-apiserver [1c24557c27c423ad2adca3ff83d2c7860c713d0f206d63725b22eb79e33e4e52] ...
	I0116 04:08:16.957579 2421826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c24557c27c423ad2adca3ff83d2c7860c713d0f206d63725b22eb79e33e4e52"
	I0116 04:08:17.018198 2421826 logs.go:123] Gathering logs for coredns [5b614e9c029efc5e0287c89b5e9a80222942369965733bac610f2073da759a60] ...
	I0116 04:08:17.018238 2421826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b614e9c029efc5e0287c89b5e9a80222942369965733bac610f2073da759a60"
	I0116 04:08:17.066106 2421826 logs.go:123] Gathering logs for kindnet [1e42c37eeba0422266b5ea293a18c186a2cb272fe6dcb68497cb18b9ac9c8493] ...
	I0116 04:08:17.066136 2421826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e42c37eeba0422266b5ea293a18c186a2cb272fe6dcb68497cb18b9ac9c8493"
	I0116 04:08:17.124330 2421826 out.go:309] Setting ErrFile to fd 2...
	I0116 04:08:17.124358 2421826 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0116 04:08:17.124445 2421826 out.go:239] X Problems detected in kubelet:
	W0116 04:08:17.124462 2421826 out.go:239]   Jan 16 04:07:32 addons-775662 kubelet[1352]: E0116 04:07:32.596212    1352 reflector.go:147] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-775662" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-775662' and this object
	W0116 04:08:17.124596 2421826 out.go:239]   Jan 16 04:07:32 addons-775662 kubelet[1352]: W0116 04:07:32.628300    1352 reflector.go:535] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-775662" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-775662' and this object
	W0116 04:08:17.124633 2421826 out.go:239]   Jan 16 04:07:32 addons-775662 kubelet[1352]: E0116 04:07:32.628343    1352 reflector.go:147] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-775662" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-775662' and this object
	W0116 04:08:17.124648 2421826 out.go:239]   Jan 16 04:07:32 addons-775662 kubelet[1352]: W0116 04:07:32.628642    1352 reflector.go:535] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-775662" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-775662' and this object
	W0116 04:08:17.124656 2421826 out.go:239]   Jan 16 04:07:32 addons-775662 kubelet[1352]: E0116 04:07:32.628670    1352 reflector.go:147] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-775662" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-775662' and this object
	I0116 04:08:17.124665 2421826 out.go:309] Setting ErrFile to fd 2...
	I0116 04:08:17.124730 2421826 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 04:08:17.284038 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:08:17.327089 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:17.450273 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:17.777845 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:08:17.825696 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:17.950215 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:18.281084 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:08:18.323611 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:18.450941 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:18.777769 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:08:18.826203 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:18.950937 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:19.279791 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:08:19.323550 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:19.460622 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:19.794721 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:08:19.823728 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:19.951713 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:20.278902 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:08:20.324529 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:20.467789 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:20.778026 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:08:20.824279 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:20.950438 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:21.277566 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:08:21.331762 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:21.461207 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:21.777571 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:08:21.823729 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:21.949879 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:22.276840 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:08:22.323274 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:22.450622 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:22.776555 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:08:22.824152 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:22.949807 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:23.280204 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:08:23.330817 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:23.450249 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:23.778808 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:08:23.830919 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:23.950257 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:24.283078 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:08:24.323318 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:24.450676 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:24.776382 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:08:24.823805 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:24.950507 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:25.277654 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:08:25.326430 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:25.452352 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:25.777469 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:08:25.825050 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:25.952400 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:26.277028 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 04:08:26.324315 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:26.451174 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:26.777416 2421826 kapi.go:107] duration metric: took 1m20.0067385s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0116 04:08:26.823184 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:26.950250 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:27.126417 2421826 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0116 04:08:27.139878 2421826 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0116 04:08:27.141308 2421826 api_server.go:141] control plane version: v1.28.4
	I0116 04:08:27.141336 2421826 api_server.go:131] duration metric: took 11.516810067s to wait for apiserver health ...
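
The healthz wait above succeeds once GET https://192.168.49.2:8443/healthz returns HTTP 200 with body "ok". A compact sketch of that probe; it skips certificate verification purely for brevity, whereas the real check authenticates with the cluster's certificates:

    package diag

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "strings"
        "time"
    )

    // checkHealthz performs the probe behind "Checking apiserver healthz
    // at ...": success means HTTP 200 and a body of exactly "ok".
    func checkHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Illustrative shortcut only: trust the cluster CA in real use.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        if err != nil {
            return err
        }
        if resp.StatusCode != http.StatusOK || strings.TrimSpace(string(body)) != "ok" {
            return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
        }
        return nil
    }
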
	I0116 04:08:27.141345 2421826 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 04:08:27.141369 2421826 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0116 04:08:27.141434 2421826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0116 04:08:27.192025 2421826 cri.go:89] found id: "1c24557c27c423ad2adca3ff83d2c7860c713d0f206d63725b22eb79e33e4e52"
	I0116 04:08:27.192047 2421826 cri.go:89] found id: ""
	I0116 04:08:27.192056 2421826 logs.go:284] 1 containers: [1c24557c27c423ad2adca3ff83d2c7860c713d0f206d63725b22eb79e33e4e52]
	I0116 04:08:27.192113 2421826 ssh_runner.go:195] Run: which crictl
	I0116 04:08:27.197117 2421826 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0116 04:08:27.197220 2421826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0116 04:08:27.244716 2421826 cri.go:89] found id: "36cc42d34657f7dcd2d69e34f5d7f8df00436cc8a4e4a7f2a34bc719382488be"
	I0116 04:08:27.244741 2421826 cri.go:89] found id: ""
	I0116 04:08:27.244775 2421826 logs.go:284] 1 containers: [36cc42d34657f7dcd2d69e34f5d7f8df00436cc8a4e4a7f2a34bc719382488be]
	I0116 04:08:27.244836 2421826 ssh_runner.go:195] Run: which crictl
	I0116 04:08:27.249684 2421826 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0116 04:08:27.249759 2421826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0116 04:08:27.296550 2421826 cri.go:89] found id: "eb7e7a284f2385045329fe859b5b2e40a28b66350e31afca6f859bb745b15ba5"
	I0116 04:08:27.296626 2421826 cri.go:89] found id: "5b614e9c029efc5e0287c89b5e9a80222942369965733bac610f2073da759a60"
	I0116 04:08:27.296647 2421826 cri.go:89] found id: ""
	I0116 04:08:27.296677 2421826 logs.go:284] 2 containers: [eb7e7a284f2385045329fe859b5b2e40a28b66350e31afca6f859bb745b15ba5 5b614e9c029efc5e0287c89b5e9a80222942369965733bac610f2073da759a60]
	I0116 04:08:27.296780 2421826 ssh_runner.go:195] Run: which crictl
	I0116 04:08:27.301966 2421826 ssh_runner.go:195] Run: which crictl
	I0116 04:08:27.306684 2421826 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0116 04:08:27.306793 2421826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0116 04:08:27.324488 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:27.355558 2421826 cri.go:89] found id: "5c3fd4646f1bea194b19dc2c00b7584d0a51e7d87dd37dc711097c1e0a3be0b5"
	I0116 04:08:27.355626 2421826 cri.go:89] found id: ""
	I0116 04:08:27.355648 2421826 logs.go:284] 1 containers: [5c3fd4646f1bea194b19dc2c00b7584d0a51e7d87dd37dc711097c1e0a3be0b5]
	I0116 04:08:27.355729 2421826 ssh_runner.go:195] Run: which crictl
	I0116 04:08:27.360896 2421826 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0116 04:08:27.361023 2421826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0116 04:08:27.405603 2421826 cri.go:89] found id: "31a7c84e2beb2e15f37884bb50aaadad8bccc0bed1da9069580e535a6f132f2e"
	I0116 04:08:27.405641 2421826 cri.go:89] found id: ""
	I0116 04:08:27.405650 2421826 logs.go:284] 1 containers: [31a7c84e2beb2e15f37884bb50aaadad8bccc0bed1da9069580e535a6f132f2e]
	I0116 04:08:27.405715 2421826 ssh_runner.go:195] Run: which crictl
	I0116 04:08:27.410983 2421826 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0116 04:08:27.411086 2421826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0116 04:08:27.451665 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:27.464321 2421826 cri.go:89] found id: "3104f9d66b0d9e714b770d97a86007e470c3c218775525423c46cf71c8d3ebc1"
	I0116 04:08:27.464384 2421826 cri.go:89] found id: ""
	I0116 04:08:27.464406 2421826 logs.go:284] 1 containers: [3104f9d66b0d9e714b770d97a86007e470c3c218775525423c46cf71c8d3ebc1]
	I0116 04:08:27.464504 2421826 ssh_runner.go:195] Run: which crictl
	I0116 04:08:27.469172 2421826 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0116 04:08:27.469285 2421826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0116 04:08:27.513203 2421826 cri.go:89] found id: "1e42c37eeba0422266b5ea293a18c186a2cb272fe6dcb68497cb18b9ac9c8493"
	I0116 04:08:27.513235 2421826 cri.go:89] found id: ""
	I0116 04:08:27.513244 2421826 logs.go:284] 1 containers: [1e42c37eeba0422266b5ea293a18c186a2cb272fe6dcb68497cb18b9ac9c8493]
	I0116 04:08:27.513305 2421826 ssh_runner.go:195] Run: which crictl
	I0116 04:08:27.518239 2421826 logs.go:123] Gathering logs for kubelet ...
	I0116 04:08:27.518266 2421826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0116 04:08:27.576572 2421826 logs.go:138] Found kubelet problem: Jan 16 04:07:05 addons-775662 kubelet[1352]: W0116 04:07:05.970100    1352 reflector.go:535] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-775662" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-775662' and this object
	W0116 04:08:27.576834 2421826 logs.go:138] Found kubelet problem: Jan 16 04:07:05 addons-775662 kubelet[1352]: E0116 04:07:05.970170    1352 reflector.go:147] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-775662" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-775662' and this object
	W0116 04:08:27.583395 2421826 logs.go:138] Found kubelet problem: Jan 16 04:07:32 addons-775662 kubelet[1352]: W0116 04:07:32.595964    1352 reflector.go:535] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-775662" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-775662' and this object
	W0116 04:08:27.583611 2421826 logs.go:138] Found kubelet problem: Jan 16 04:07:32 addons-775662 kubelet[1352]: E0116 04:07:32.596018    1352 reflector.go:147] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-775662" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-775662' and this object
	W0116 04:08:27.583799 2421826 logs.go:138] Found kubelet problem: Jan 16 04:07:32 addons-775662 kubelet[1352]: W0116 04:07:32.596082    1352 reflector.go:535] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-775662" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-775662' and this object
	W0116 04:08:27.584006 2421826 logs.go:138] Found kubelet problem: Jan 16 04:07:32 addons-775662 kubelet[1352]: E0116 04:07:32.596104    1352 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-775662" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-775662' and this object
	W0116 04:08:27.584175 2421826 logs.go:138] Found kubelet problem: Jan 16 04:07:32 addons-775662 kubelet[1352]: W0116 04:07:32.596197    1352 reflector.go:535] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-775662" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-775662' and this object
	W0116 04:08:27.584361 2421826 logs.go:138] Found kubelet problem: Jan 16 04:07:32 addons-775662 kubelet[1352]: E0116 04:07:32.596212    1352 reflector.go:147] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-775662" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-775662' and this object
	W0116 04:08:27.586401 2421826 logs.go:138] Found kubelet problem: Jan 16 04:07:32 addons-775662 kubelet[1352]: W0116 04:07:32.628300    1352 reflector.go:535] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-775662" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-775662' and this object
	W0116 04:08:27.586611 2421826 logs.go:138] Found kubelet problem: Jan 16 04:07:32 addons-775662 kubelet[1352]: E0116 04:07:32.628343    1352 reflector.go:147] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-775662" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-775662' and this object
	W0116 04:08:27.586795 2421826 logs.go:138] Found kubelet problem: Jan 16 04:07:32 addons-775662 kubelet[1352]: W0116 04:07:32.628642    1352 reflector.go:535] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-775662" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-775662' and this object
	W0116 04:08:27.586991 2421826 logs.go:138] Found kubelet problem: Jan 16 04:07:32 addons-775662 kubelet[1352]: E0116 04:07:32.628670    1352 reflector.go:147] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-775662" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-775662' and this object
	I0116 04:08:27.623298 2421826 logs.go:123] Gathering logs for dmesg ...
	I0116 04:08:27.623326 2421826 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0116 04:08:27.645372 2421826 logs.go:123] Gathering logs for coredns [5b614e9c029efc5e0287c89b5e9a80222942369965733bac610f2073da759a60] ...
	I0116 04:08:27.645401 2421826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b614e9c029efc5e0287c89b5e9a80222942369965733bac610f2073da759a60"
	I0116 04:08:27.687994 2421826 logs.go:123] Gathering logs for kube-scheduler [5c3fd4646f1bea194b19dc2c00b7584d0a51e7d87dd37dc711097c1e0a3be0b5] ...
	I0116 04:08:27.688023 2421826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c3fd4646f1bea194b19dc2c00b7584d0a51e7d87dd37dc711097c1e0a3be0b5"
	I0116 04:08:27.783214 2421826 logs.go:123] Gathering logs for kindnet [1e42c37eeba0422266b5ea293a18c186a2cb272fe6dcb68497cb18b9ac9c8493] ...
	I0116 04:08:27.783252 2421826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e42c37eeba0422266b5ea293a18c186a2cb272fe6dcb68497cb18b9ac9c8493"
	I0116 04:08:27.824386 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:27.836301 2421826 logs.go:123] Gathering logs for container status ...
	I0116 04:08:27.836330 2421826 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0116 04:08:27.894256 2421826 logs.go:123] Gathering logs for kube-apiserver [1c24557c27c423ad2adca3ff83d2c7860c713d0f206d63725b22eb79e33e4e52] ...
	I0116 04:08:27.894289 2421826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c24557c27c423ad2adca3ff83d2c7860c713d0f206d63725b22eb79e33e4e52"
	I0116 04:08:27.950465 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:27.973160 2421826 logs.go:123] Gathering logs for coredns [eb7e7a284f2385045329fe859b5b2e40a28b66350e31afca6f859bb745b15ba5] ...
	I0116 04:08:27.973198 2421826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb7e7a284f2385045329fe859b5b2e40a28b66350e31afca6f859bb745b15ba5"
	I0116 04:08:28.045768 2421826 logs.go:123] Gathering logs for kube-proxy [31a7c84e2beb2e15f37884bb50aaadad8bccc0bed1da9069580e535a6f132f2e] ...
	I0116 04:08:28.045808 2421826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31a7c84e2beb2e15f37884bb50aaadad8bccc0bed1da9069580e535a6f132f2e"
	I0116 04:08:28.097226 2421826 logs.go:123] Gathering logs for CRI-O ...
	I0116 04:08:28.097254 2421826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0116 04:08:28.188110 2421826 logs.go:123] Gathering logs for describe nodes ...
	I0116 04:08:28.188144 2421826 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0116 04:08:28.324402 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:28.340550 2421826 logs.go:123] Gathering logs for etcd [36cc42d34657f7dcd2d69e34f5d7f8df00436cc8a4e4a7f2a34bc719382488be] ...
	I0116 04:08:28.340578 2421826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36cc42d34657f7dcd2d69e34f5d7f8df00436cc8a4e4a7f2a34bc719382488be"
	I0116 04:08:28.391552 2421826 logs.go:123] Gathering logs for kube-controller-manager [3104f9d66b0d9e714b770d97a86007e470c3c218775525423c46cf71c8d3ebc1] ...
	I0116 04:08:28.391580 2421826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3104f9d66b0d9e714b770d97a86007e470c3c218775525423c46cf71c8d3ebc1"
	I0116 04:08:28.454510 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:28.475706 2421826 out.go:309] Setting ErrFile to fd 2...
	I0116 04:08:28.475747 2421826 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0116 04:08:28.475813 2421826 out.go:239] X Problems detected in kubelet:
	W0116 04:08:28.475828 2421826 out.go:239]   Jan 16 04:07:32 addons-775662 kubelet[1352]: E0116 04:07:32.596212    1352 reflector.go:147] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-775662" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-775662' and this object
	W0116 04:08:28.475843 2421826 out.go:239]   Jan 16 04:07:32 addons-775662 kubelet[1352]: W0116 04:07:32.628300    1352 reflector.go:535] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-775662" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-775662' and this object
	W0116 04:08:28.475853 2421826 out.go:239]   Jan 16 04:07:32 addons-775662 kubelet[1352]: E0116 04:07:32.628343    1352 reflector.go:147] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-775662" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-775662' and this object
	W0116 04:08:28.475874 2421826 out.go:239]   Jan 16 04:07:32 addons-775662 kubelet[1352]: W0116 04:07:32.628642    1352 reflector.go:535] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-775662" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-775662' and this object
	W0116 04:08:28.475884 2421826 out.go:239]   Jan 16 04:07:32 addons-775662 kubelet[1352]: E0116 04:07:32.628670    1352 reflector.go:147] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-775662" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-775662' and this object
	I0116 04:08:28.475893 2421826 out.go:309] Setting ErrFile to fd 2...
	I0116 04:08:28.475904 2421826 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 04:08:28.823361 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:28.951002 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:29.324088 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:29.450407 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:29.824241 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:29.950585 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:30.323735 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:30.450441 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:30.822988 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:30.949437 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:31.323305 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:31.450060 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:31.824064 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:31.950489 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:32.323789 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:32.449930 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:32.824118 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:32.949975 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:33.323889 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:33.450927 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:33.823715 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:33.950447 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:34.323336 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:34.450199 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:34.824155 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:34.950208 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:35.323787 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:35.450277 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:35.824008 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:35.950294 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:36.323442 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:36.450099 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:36.823917 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:36.950737 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:37.323624 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:37.450436 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:37.823875 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:37.957314 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:38.327641 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:38.455699 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:38.495617 2421826 system_pods.go:59] 19 kube-system pods found
	I0116 04:08:38.495657 2421826 system_pods.go:61] "coredns-5dd5756b68-bcghj" [1624e1f9-25dc-4d13-86fa-e24c3d6bb398] Running
	I0116 04:08:38.495664 2421826 system_pods.go:61] "coredns-5dd5756b68-j2nn4" [b2e51c99-2b09-48ac-8bb8-0c5d8828b750] Running
	I0116 04:08:38.495670 2421826 system_pods.go:61] "csi-hostpath-attacher-0" [42330aea-ccd9-4f95-bc63-80011ed70673] Running
	I0116 04:08:38.495675 2421826 system_pods.go:61] "csi-hostpath-resizer-0" [c629c1ab-e583-4d6b-9166-c3acbfbb44a1] Running
	I0116 04:08:38.495680 2421826 system_pods.go:61] "csi-hostpathplugin-4lvqq" [5069e170-8653-4c69-8c30-e73c7f876d32] Running
	I0116 04:08:38.495687 2421826 system_pods.go:61] "etcd-addons-775662" [2e54a078-e38b-46ff-99f2-686c74137ab8] Running
	I0116 04:08:38.495692 2421826 system_pods.go:61] "kindnet-gllrn" [4eab4de6-e95d-4af3-906e-b10cdee9d354] Running
	I0116 04:08:38.495697 2421826 system_pods.go:61] "kube-apiserver-addons-775662" [eea8f97d-c596-41f1-93ec-7cfc90e4bb48] Running
	I0116 04:08:38.495703 2421826 system_pods.go:61] "kube-controller-manager-addons-775662" [de27d553-6bcf-43be-b96c-dcc46c208331] Running
	I0116 04:08:38.495715 2421826 system_pods.go:61] "kube-ingress-dns-minikube" [73efc9e5-d9e9-4a77-8f6d-1657ed39093f] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0116 04:08:38.495722 2421826 system_pods.go:61] "kube-proxy-rkmnb" [3b827a95-5b31-4f71-bb84-b1207ae094cd] Running
	I0116 04:08:38.495732 2421826 system_pods.go:61] "kube-scheduler-addons-775662" [23d13226-5979-4a47-9d3c-52b24fbfadb1] Running
	I0116 04:08:38.495737 2421826 system_pods.go:61] "metrics-server-7c66d45ddc-dtqwj" [d8578bd2-418d-4fcd-ac58-07a6188c73ed] Running
	I0116 04:08:38.495743 2421826 system_pods.go:61] "nvidia-device-plugin-daemonset-gb8vg" [94232484-f9f3-41aa-9cb0-026faa3e71df] Running
	I0116 04:08:38.495754 2421826 system_pods.go:61] "registry-fshp9" [21ff04cb-d00c-4e3c-ae50-b7c1be39cb71] Running
	I0116 04:08:38.495759 2421826 system_pods.go:61] "registry-proxy-ljm2x" [8d2e9319-cfe5-4357-8b2b-7e474e6d5b12] Running
	I0116 04:08:38.495764 2421826 system_pods.go:61] "snapshot-controller-58dbcc7b99-4npn7" [508a301f-40fa-45e3-bb05-e166de60f51d] Running
	I0116 04:08:38.495769 2421826 system_pods.go:61] "snapshot-controller-58dbcc7b99-xsnp2" [84634d97-7103-40d9-9583-490b90a8fdd0] Running
	I0116 04:08:38.495774 2421826 system_pods.go:61] "storage-provisioner" [9e63f79d-7ffc-4bc5-bf52-249cf0478cb8] Running
	I0116 04:08:38.495780 2421826 system_pods.go:74] duration metric: took 11.354429252s to wait for pod list to return data ...
	I0116 04:08:38.495793 2421826 default_sa.go:34] waiting for default service account to be created ...
	I0116 04:08:38.499204 2421826 default_sa.go:45] found service account: "default"
	I0116 04:08:38.499227 2421826 default_sa.go:55] duration metric: took 3.427909ms for default service account to be created ...
	I0116 04:08:38.499236 2421826 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 04:08:38.518743 2421826 system_pods.go:86] 19 kube-system pods found
	I0116 04:08:38.518822 2421826 system_pods.go:89] "coredns-5dd5756b68-bcghj" [1624e1f9-25dc-4d13-86fa-e24c3d6bb398] Running
	I0116 04:08:38.518838 2421826 system_pods.go:89] "coredns-5dd5756b68-j2nn4" [b2e51c99-2b09-48ac-8bb8-0c5d8828b750] Running
	I0116 04:08:38.518845 2421826 system_pods.go:89] "csi-hostpath-attacher-0" [42330aea-ccd9-4f95-bc63-80011ed70673] Running
	I0116 04:08:38.518850 2421826 system_pods.go:89] "csi-hostpath-resizer-0" [c629c1ab-e583-4d6b-9166-c3acbfbb44a1] Running
	I0116 04:08:38.518860 2421826 system_pods.go:89] "csi-hostpathplugin-4lvqq" [5069e170-8653-4c69-8c30-e73c7f876d32] Running
	I0116 04:08:38.518865 2421826 system_pods.go:89] "etcd-addons-775662" [2e54a078-e38b-46ff-99f2-686c74137ab8] Running
	I0116 04:08:38.518871 2421826 system_pods.go:89] "kindnet-gllrn" [4eab4de6-e95d-4af3-906e-b10cdee9d354] Running
	I0116 04:08:38.518882 2421826 system_pods.go:89] "kube-apiserver-addons-775662" [eea8f97d-c596-41f1-93ec-7cfc90e4bb48] Running
	I0116 04:08:38.518888 2421826 system_pods.go:89] "kube-controller-manager-addons-775662" [de27d553-6bcf-43be-b96c-dcc46c208331] Running
	I0116 04:08:38.518896 2421826 system_pods.go:89] "kube-ingress-dns-minikube" [73efc9e5-d9e9-4a77-8f6d-1657ed39093f] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0116 04:08:38.518906 2421826 system_pods.go:89] "kube-proxy-rkmnb" [3b827a95-5b31-4f71-bb84-b1207ae094cd] Running
	I0116 04:08:38.518913 2421826 system_pods.go:89] "kube-scheduler-addons-775662" [23d13226-5979-4a47-9d3c-52b24fbfadb1] Running
	I0116 04:08:38.518919 2421826 system_pods.go:89] "metrics-server-7c66d45ddc-dtqwj" [d8578bd2-418d-4fcd-ac58-07a6188c73ed] Running
	I0116 04:08:38.518924 2421826 system_pods.go:89] "nvidia-device-plugin-daemonset-gb8vg" [94232484-f9f3-41aa-9cb0-026faa3e71df] Running
	I0116 04:08:38.518929 2421826 system_pods.go:89] "registry-fshp9" [21ff04cb-d00c-4e3c-ae50-b7c1be39cb71] Running
	I0116 04:08:38.518933 2421826 system_pods.go:89] "registry-proxy-ljm2x" [8d2e9319-cfe5-4357-8b2b-7e474e6d5b12] Running
	I0116 04:08:38.518938 2421826 system_pods.go:89] "snapshot-controller-58dbcc7b99-4npn7" [508a301f-40fa-45e3-bb05-e166de60f51d] Running
	I0116 04:08:38.518943 2421826 system_pods.go:89] "snapshot-controller-58dbcc7b99-xsnp2" [84634d97-7103-40d9-9583-490b90a8fdd0] Running
	I0116 04:08:38.518952 2421826 system_pods.go:89] "storage-provisioner" [9e63f79d-7ffc-4bc5-bf52-249cf0478cb8] Running
	I0116 04:08:38.518964 2421826 system_pods.go:126] duration metric: took 19.722896ms to wait for k8s-apps to be running ...
	I0116 04:08:38.518972 2421826 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 04:08:38.519043 2421826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 04:08:38.566894 2421826 system_svc.go:56] duration metric: took 47.911925ms WaitForService to wait for kubelet.
	I0116 04:08:38.566923 2421826 kubeadm.go:581] duration metric: took 1m38.725908759s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 04:08:38.566945 2421826 node_conditions.go:102] verifying NodePressure condition ...
	I0116 04:08:38.570672 2421826 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0116 04:08:38.570708 2421826 node_conditions.go:123] node cpu capacity is 2
	I0116 04:08:38.570722 2421826 node_conditions.go:105] duration metric: took 3.771383ms to run NodePressure ...
	I0116 04:08:38.570733 2421826 start.go:228] waiting for startup goroutines ...
	I0116 04:08:38.825351 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:38.949594 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:39.324197 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:39.451826 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:39.823344 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:39.951247 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:40.323553 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:40.453697 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:40.823983 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:40.952892 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:41.323834 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:41.452068 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:41.829041 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:41.950868 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:42.328830 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:42.450380 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:42.824327 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:42.953489 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:43.324157 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:43.450965 2421826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 04:08:43.824087 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:43.950481 2421826 kapi.go:107] duration metric: took 1m37.50539213s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0116 04:08:44.323429 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:44.824507 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:45.328020 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:45.824146 2421826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 04:08:46.324014 2421826 kapi.go:107] duration metric: took 1m37.004276204s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0116 04:08:46.325921 2421826 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-775662 cluster.
	I0116 04:08:46.327906 2421826 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0116 04:08:46.329747 2421826 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0116 04:08:46.331929 2421826 out.go:177] * Enabled addons: storage-provisioner, ingress-dns, nvidia-device-plugin, metrics-server, cloud-spanner, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0116 04:08:46.335005 2421826 addons.go:505] enable addons completed in 1m46.780159777s: enabled=[storage-provisioner ingress-dns nvidia-device-plugin metrics-server cloud-spanner inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0116 04:08:46.335061 2421826 start.go:233] waiting for cluster config update ...
	I0116 04:08:46.335082 2421826 start.go:242] writing updated cluster config ...
	I0116 04:08:46.335401 2421826 ssh_runner.go:195] Run: rm -f paused
	I0116 04:08:46.678354 2421826 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0116 04:08:46.680139 2421826 out.go:177] * Done! kubectl is now configured to use "addons-775662" cluster and "default" namespace by default
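
For reference, the opt-out that the gcp-auth message above describes can be expressed directly in a pod manifest. The sketch below is minimal and illustrative: the `gcp-auth-skip-secret` label key comes from the addon message itself, while the pod name and the "true" value are assumptions for demonstration, not taken from this run.

    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-auth-demo           # hypothetical name, for illustration only
      labels:
        gcp-auth-skip-secret: "true"   # label key from the addon message; value assumed
    spec:
      containers:
      - name: app
        image: gcr.io/google-samples/hello-app:1.0   # image already used elsewhere in this run

A pod carrying this label should be skipped by the gcp-auth webhook, while every other pod created in the addons-775662 cluster gets the credentials mounted, as the message states.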
	
	
	==> CRI-O <==
	Jan 16 04:12:00 addons-775662 crio[883]: time="2024-01-16 04:12:00.873022989Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=6896ca26-825c-4dee-8750-09afdcac964f name=/runtime.v1.ImageService/ImageStatus
	Jan 16 04:12:00 addons-775662 crio[883]: time="2024-01-16 04:12:00.874416143Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=fca1c865-0c99-4b71-8e16-5b779eda3442 name=/runtime.v1.ImageService/ImageStatus
	Jan 16 04:12:00 addons-775662 crio[883]: time="2024-01-16 04:12:00.874610001Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=fca1c865-0c99-4b71-8e16-5b779eda3442 name=/runtime.v1.ImageService/ImageStatus
	Jan 16 04:12:00 addons-775662 crio[883]: time="2024-01-16 04:12:00.875814992Z" level=info msg="Creating container: default/hello-world-app-5d77478584-xfg2x/hello-world-app" id=4f4603ca-0d17-4d32-936a-76117ea72be3 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 16 04:12:00 addons-775662 crio[883]: time="2024-01-16 04:12:00.875915789Z" level=warning msg="Allowed annotations are specified for workload []"
	Jan 16 04:12:00 addons-775662 crio[883]: time="2024-01-16 04:12:00.948680178Z" level=info msg="Created container ca82e9e8e4042f4fad2e89d08d284df33f7745acfd4ba06ffd06bfe0cd86c086: default/hello-world-app-5d77478584-xfg2x/hello-world-app" id=4f4603ca-0d17-4d32-936a-76117ea72be3 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 16 04:12:00 addons-775662 crio[883]: time="2024-01-16 04:12:00.949611576Z" level=info msg="Starting container: ca82e9e8e4042f4fad2e89d08d284df33f7745acfd4ba06ffd06bfe0cd86c086" id=9321119c-d81c-4efb-8c28-233ffb7b5a34 name=/runtime.v1.RuntimeService/StartContainer
	Jan 16 04:12:00 addons-775662 crio[883]: time="2024-01-16 04:12:00.961329269Z" level=info msg="Started container" PID=8276 containerID=ca82e9e8e4042f4fad2e89d08d284df33f7745acfd4ba06ffd06bfe0cd86c086 description=default/hello-world-app-5d77478584-xfg2x/hello-world-app id=9321119c-d81c-4efb-8c28-233ffb7b5a34 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0fa9c2ba0a75800791953012ca1eb1ca741ea49fad8109c31654281af626fab7
	Jan 16 04:12:00 addons-775662 conmon[8265]: conmon ca82e9e8e4042f4fad2e <ninfo>: container 8276 exited with status 1
	Jan 16 04:12:01 addons-775662 crio[883]: time="2024-01-16 04:12:01.066007506Z" level=info msg="Removing container: 4235e705f977c5062a8dd262691b73cf699122dc97ff07ee988c8fd501a45543" id=a00cea29-8953-4b67-b8d1-fe82f4aadc9f name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 16 04:12:01 addons-775662 crio[883]: time="2024-01-16 04:12:01.093488926Z" level=info msg="Removed container 4235e705f977c5062a8dd262691b73cf699122dc97ff07ee988c8fd501a45543: default/hello-world-app-5d77478584-xfg2x/hello-world-app" id=a00cea29-8953-4b67-b8d1-fe82f4aadc9f name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 16 04:12:02 addons-775662 crio[883]: time="2024-01-16 04:12:02.790509611Z" level=warning msg="Stopping container 679cc5bfb00d54532a9f6e7a3d9f1b0df07a690e3dd0b2faf1378a985116d965 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=4271dcc9-b9af-4075-bbe2-3a36e07b8774 name=/runtime.v1.RuntimeService/StopContainer
	Jan 16 04:12:02 addons-775662 conmon[5403]: conmon 679cc5bfb00d54532a9f <ninfo>: container 5414 exited with status 137
	Jan 16 04:12:02 addons-775662 crio[883]: time="2024-01-16 04:12:02.936479002Z" level=info msg="Stopped container 679cc5bfb00d54532a9f6e7a3d9f1b0df07a690e3dd0b2faf1378a985116d965: ingress-nginx/ingress-nginx-controller-69cff4fd79-zt99b/controller" id=4271dcc9-b9af-4075-bbe2-3a36e07b8774 name=/runtime.v1.RuntimeService/StopContainer
	Jan 16 04:12:02 addons-775662 crio[883]: time="2024-01-16 04:12:02.937061379Z" level=info msg="Stopping pod sandbox: 3636ad76ca29a74430a584ef45207e3242dabac9905d633032e2b878a340c7b9" id=3a287102-8bbf-411b-a53c-83b1f114f80f name=/runtime.v1.RuntimeService/StopPodSandbox
	Jan 16 04:12:02 addons-775662 crio[883]: time="2024-01-16 04:12:02.941246381Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-TVS3O7ZDOMHERMUO - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-M6MUCKTLG6FWHRAP - [0:0]\n-X KUBE-HP-TVS3O7ZDOMHERMUO\n-X KUBE-HP-M6MUCKTLG6FWHRAP\nCOMMIT\n"
	Jan 16 04:12:02 addons-775662 crio[883]: time="2024-01-16 04:12:02.942813381Z" level=info msg="Closing host port tcp:80"
	Jan 16 04:12:02 addons-775662 crio[883]: time="2024-01-16 04:12:02.942858574Z" level=info msg="Closing host port tcp:443"
	Jan 16 04:12:02 addons-775662 crio[883]: time="2024-01-16 04:12:02.944419806Z" level=info msg="Host port tcp:80 does not have an open socket"
	Jan 16 04:12:02 addons-775662 crio[883]: time="2024-01-16 04:12:02.944445216Z" level=info msg="Host port tcp:443 does not have an open socket"
	Jan 16 04:12:02 addons-775662 crio[883]: time="2024-01-16 04:12:02.944614033Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-69cff4fd79-zt99b Namespace:ingress-nginx ID:3636ad76ca29a74430a584ef45207e3242dabac9905d633032e2b878a340c7b9 UID:d85f404b-8c27-4550-ab4c-77691d6ba980 NetNS:/var/run/netns/90578597-018a-466b-aa0c-924b595d9ec1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 16 04:12:02 addons-775662 crio[883]: time="2024-01-16 04:12:02.944792326Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-69cff4fd79-zt99b from CNI network \"kindnet\" (type=ptp)"
	Jan 16 04:12:02 addons-775662 crio[883]: time="2024-01-16 04:12:02.962508036Z" level=info msg="Stopped pod sandbox: 3636ad76ca29a74430a584ef45207e3242dabac9905d633032e2b878a340c7b9" id=3a287102-8bbf-411b-a53c-83b1f114f80f name=/runtime.v1.RuntimeService/StopPodSandbox
	Jan 16 04:12:03 addons-775662 crio[883]: time="2024-01-16 04:12:03.072446161Z" level=info msg="Removing container: 679cc5bfb00d54532a9f6e7a3d9f1b0df07a690e3dd0b2faf1378a985116d965" id=37ecbe38-65ed-4798-a2ef-deabf5a82a66 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 16 04:12:03 addons-775662 crio[883]: time="2024-01-16 04:12:03.096532606Z" level=info msg="Removed container 679cc5bfb00d54532a9f6e7a3d9f1b0df07a690e3dd0b2faf1378a985116d965: ingress-nginx/ingress-nginx-controller-69cff4fd79-zt99b/controller" id=37ecbe38-65ed-4798-a2ef-deabf5a82a66 name=/runtime.v1.RuntimeService/RemoveContainer
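
The conmon lines above show the standard forced-stop sequence: CRI-O sends the stop signal, waits for the container process to exit, and kills it when the timeout elapses, which is why the controller exits with status 137 (128 + SIGKILL). In ordinary pod deletion the length of that window is the pod's termination grace period; the manifest below is a minimal sketch with illustrative names and values, not the configuration used by this test.

    apiVersion: v1
    kind: Pod
    metadata:
      name: graceful-shutdown-demo     # hypothetical name
    spec:
      terminationGracePeriodSeconds: 30   # seconds between SIGTERM and SIGKILL (30 is the Kubernetes default)
      containers:
      - name: web
        image: docker.io/library/nginx:latest   # illustrative image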
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	ca82e9e8e4042       dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79                                                             7 seconds ago        Exited              hello-world-app           2                   0fa9c2ba0a758       hello-world-app-5d77478584-xfg2x
	82cd4d586cd31       ghcr.io/headlamp-k8s/headlamp@sha256:0fe50c48c186b89ff3d341dba427174d8232a64c3062af5de854a3a7cb2105ce                        About a minute ago   Running             headlamp                  0                   b5776cf872dde       headlamp-7ddfbb94ff-r7cpl
	860a44ac64c8d       docker.io/library/nginx@sha256:7913e8fa2e6a5f0160a5e6b7ea48b7d4a301c6058d63c3d632a35a59093cb4eb                              2 minutes ago        Running             nginx                     0                   d9c882ee8a6d2       nginx
	d34ed2836b2fe       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:63b520448091bc94aa4dba00d6b3b3c25e410c4fb73aa46feae5b25f9895abaa                 3 minutes ago        Running             gcp-auth                  0                   6517ed2a9666e       gcp-auth-d4c87556c-k9bmf
	ed237dad5643f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:67202a0258c6f81d073f265f449a732c89cc1112a8e80ea27317294df6dce2b5   3 minutes ago        Exited              patch                     0                   719c85f86663d       ingress-nginx-admission-patch-kvxwt
	7e1c42f0467d4       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:67202a0258c6f81d073f265f449a732c89cc1112a8e80ea27317294df6dce2b5   3 minutes ago        Exited              create                    0                   b5299b702f73a       ingress-nginx-admission-create-5jpr9
	316b2d2e08c1b       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98             4 minutes ago        Running             local-path-provisioner    0                   8a511a734db9c       local-path-provisioner-78b46b4d5c-jfzc4
	d13b3d9c50c89       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              4 minutes ago        Running             yakd                      0                   08c36ae4da36c       yakd-dashboard-9947fc6bf-qzzqr
	eb7e7a284f238       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                                             4 minutes ago        Running             coredns                   0                   9602ca20639c3       coredns-5dd5756b68-bcghj
	e5458f673236e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             4 minutes ago        Running             storage-provisioner       0                   1d771ce79ba7b       storage-provisioner
	5b614e9c029ef       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                                             4 minutes ago        Running             coredns                   0                   109faba770612       coredns-5dd5756b68-j2nn4
	31a7c84e2beb2       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39                                                             5 minutes ago        Running             kube-proxy                0                   86f3677972627       kube-proxy-rkmnb
	1e42c37eeba04       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                                             5 minutes ago        Running             kindnet-cni               0                   1b7c82cb2b41d       kindnet-gllrn
	1c24557c27c42       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419                                                             5 minutes ago        Running             kube-apiserver            0                   3d4f6f48bb745       kube-apiserver-addons-775662
	3104f9d66b0d9       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b                                                             5 minutes ago        Running             kube-controller-manager   0                   26f1b17d99d14       kube-controller-manager-addons-775662
	36cc42d34657f       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                                             5 minutes ago        Running             etcd                      0                   a24963e7d9143       etcd-addons-775662
	5c3fd4646f1be       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54                                                             5 minutes ago        Running             kube-scheduler            0                   16c30d62e7fd7       kube-scheduler-addons-775662
	
	
	==> coredns [5b614e9c029efc5e0287c89b5e9a80222942369965733bac610f2073da759a60] <==
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:48592 - 58540 "HINFO IN 2588845017884245865.2272524591560763161. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014373798s
	[INFO] 10.244.0.12:45880 - 12154 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000419411s
	[INFO] 10.244.0.12:45880 - 18293 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000250267s
	[INFO] 10.244.0.12:56828 - 54898 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000151947s
	[INFO] 10.244.0.12:56828 - 26996 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000137498s
	[INFO] 10.244.0.12:47327 - 45365 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.0015803s
	[INFO] 10.244.0.12:47327 - 25143 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001425326s
	[INFO] 10.244.0.21:38786 - 59706 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000155746s
	[INFO] 10.244.0.21:39299 - 15952 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000073976s
	[INFO] 10.244.0.21:60707 - 44772 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000139836s
	[INFO] 10.244.0.21:46725 - 38276 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000073262s
	[INFO] 10.244.0.22:57561 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000517771s
	[INFO] 10.244.0.22:59988 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000863896s
	[INFO] 10.244.0.20:51679 - 48127 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000235005s
	[INFO] 10.244.0.20:51679 - 52736 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000183741s
	[INFO] 10.244.0.20:51679 - 21780 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.0001459s
	[INFO] 10.244.0.20:51679 - 1059 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000087661s
	[INFO] 10.244.0.20:51679 - 10953 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000092847s
	[INFO] 10.244.0.20:51679 - 53466 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000082443s
	[INFO] 10.244.0.20:51679 - 44606 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001367162s
	[INFO] 10.244.0.20:51679 - 46191 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001081187s
	[INFO] 10.244.0.20:51679 - 14791 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000148271s
	
	
	==> coredns [eb7e7a284f2385045329fe859b5b2e40a28b66350e31afca6f859bb745b15ba5] <==
	[INFO] 10.244.0.20:52775 - 39543 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000078127s
	[INFO] 10.244.0.20:36160 - 39685 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00004109s
	[INFO] 10.244.0.20:36160 - 32187 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000044643s
	[INFO] 10.244.0.20:52775 - 43320 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000073598s
	[INFO] 10.244.0.20:36160 - 6325 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000033837s
	[INFO] 10.244.0.20:52775 - 7501 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000077159s
	[INFO] 10.244.0.20:36160 - 47828 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000032032s
	[INFO] 10.244.0.20:52775 - 27666 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000078661s
	[INFO] 10.244.0.20:36160 - 39708 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000034698s
	[INFO] 10.244.0.20:36160 - 3590 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000040844s
	[INFO] 10.244.0.20:52775 - 50478 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002012543s
	[INFO] 10.244.0.20:36160 - 16366 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001519469s
	[INFO] 10.244.0.20:52775 - 53459 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001571464s
	[INFO] 10.244.0.20:36160 - 42772 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001253482s
	[INFO] 10.244.0.20:52775 - 43748 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000147976s
	[INFO] 10.244.0.20:36160 - 50186 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000074066s
	[INFO] 10.244.0.20:59808 - 18726 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000099124s
	[INFO] 10.244.0.20:59808 - 37332 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000070366s
	[INFO] 10.244.0.20:59808 - 13922 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.0000442s
	[INFO] 10.244.0.20:59808 - 54702 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000041927s
	[INFO] 10.244.0.20:59808 - 27487 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000051781s
	[INFO] 10.244.0.20:59808 - 28105 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000048426s
	[INFO] 10.244.0.20:59808 - 38043 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001723624s
	[INFO] 10.244.0.20:59808 - 32902 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001082057s
	[INFO] 10.244.0.20:59808 - 44491 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000512988s
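
The long runs of NXDOMAIN answers above are the cluster DNS search path at work: a pod's resolv.conf in a ClusterFirst cluster typically carries ndots:5, so a name such as hello-world-app.default.svc.cluster.local (four dots, under the threshold) is first tried with every search suffix (ingress-nginx.svc.cluster.local, svc.cluster.local, cluster.local, and the node's us-east-2.compute.internal domain) before the absolute query finally returns NOERROR. Pods that are sensitive to this fan-out can lower ndots through dnsConfig; a minimal sketch with an assumed pod name:

    apiVersion: v1
    kind: Pod
    metadata:
      name: low-ndots-demo             # hypothetical name
    spec:
      dnsConfig:
        options:
        - name: ndots
          value: "1"                   # any name containing a dot is tried as absolute first
      containers:
      - name: app
        image: gcr.io/google-samples/hello-app:1.0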
	
	
	==> describe nodes <==
	Name:               addons-775662
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-775662
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e8fa5f64d0e7272be43ff25ed3826261f0a2578
	                    minikube.k8s.io/name=addons-775662
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_16T04_06_47_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-775662
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 04:06:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-775662
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Jan 2024 04:12:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 04:11:53 +0000   Tue, 16 Jan 2024 04:06:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 04:11:53 +0000   Tue, 16 Jan 2024 04:06:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 04:11:53 +0000   Tue, 16 Jan 2024 04:06:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 04:11:53 +0000   Tue, 16 Jan 2024 04:07:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-775662
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 b191e48ebeb04b3e9c31dcefb5ff111f
	  System UUID:                1d5e487a-35a3-429d-9bb3-e872b0501e84
	  Boot ID:                    3a165b82-f13d-4880-a2c5-3d4f8ff28eca
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-xfg2x           0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  gcp-auth                    gcp-auth-d4c87556c-k9bmf                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  headlamp                    headlamp-7ddfbb94ff-r7cpl                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 coredns-5dd5756b68-bcghj                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m9s
	  kube-system                 coredns-5dd5756b68-j2nn4                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m9s
	  kube-system                 etcd-addons-775662                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m21s
	  kube-system                 kindnet-gllrn                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m9s
	  kube-system                 kube-apiserver-addons-775662               250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 kube-controller-manager-addons-775662      200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 kube-proxy-rkmnb                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 kube-scheduler-addons-775662               100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	  local-path-storage          local-path-provisioner-78b46b4d5c-jfzc4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-qzzqr             0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             418Mi (5%)  646Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m2s                   kube-proxy       
	  Normal  Starting                 5m29s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m29s (x8 over 5m29s)  kubelet          Node addons-775662 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m29s (x8 over 5m29s)  kubelet          Node addons-775662 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m29s (x8 over 5m29s)  kubelet          Node addons-775662 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m22s                  kubelet          Node addons-775662 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m22s                  kubelet          Node addons-775662 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m22s                  kubelet          Node addons-775662 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m10s                  node-controller  Node addons-775662 event: Registered Node addons-775662 in Controller
	  Normal  NodeReady                4m36s                  kubelet          Node addons-775662 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.001254] FS-Cache: O-key=[8] '0d683b0000000000'
	[  +0.000781] FS-Cache: N-cookie c=0000008a [p=00000081 fl=2 nc=0 na=1]
	[  +0.001068] FS-Cache: N-cookie d=00000000b2a3e576{9p.inode} n=000000000a6bc490
	[  +0.001149] FS-Cache: N-key=[8] '0d683b0000000000'
	[  +0.002991] FS-Cache: Duplicate cookie detected
	[  +0.000752] FS-Cache: O-cookie c=00000084 [p=00000081 fl=226 nc=0 na=1]
	[  +0.001085] FS-Cache: O-cookie d=00000000b2a3e576{9p.inode} n=00000000fbbfa844
	[  +0.001176] FS-Cache: O-key=[8] '0d683b0000000000'
	[  +0.000789] FS-Cache: N-cookie c=0000008b [p=00000081 fl=2 nc=0 na=1]
	[  +0.001051] FS-Cache: N-cookie d=00000000b2a3e576{9p.inode} n=000000005b84793c
	[  +0.001276] FS-Cache: N-key=[8] '0d683b0000000000'
	[  +3.649618] FS-Cache: Duplicate cookie detected
	[  +0.000785] FS-Cache: O-cookie c=00000082 [p=00000081 fl=226 nc=0 na=1]
	[  +0.001111] FS-Cache: O-cookie d=00000000b2a3e576{9p.inode} n=000000002f304c29
	[  +0.001179] FS-Cache: O-key=[8] '0c683b0000000000'
	[  +0.000798] FS-Cache: N-cookie c=0000008d [p=00000081 fl=2 nc=0 na=1]
	[  +0.001038] FS-Cache: N-cookie d=00000000b2a3e576{9p.inode} n=000000000a6bc490
	[  +0.001170] FS-Cache: N-key=[8] '0c683b0000000000'
	[  +0.411590] FS-Cache: Duplicate cookie detected
	[  +0.000784] FS-Cache: O-cookie c=00000087 [p=00000081 fl=226 nc=0 na=1]
	[  +0.001085] FS-Cache: O-cookie d=00000000b2a3e576{9p.inode} n=00000000eb29d226
	[  +0.001171] FS-Cache: O-key=[8] '12683b0000000000'
	[  +0.000791] FS-Cache: N-cookie c=0000008e [p=00000081 fl=2 nc=0 na=1]
	[  +0.001125] FS-Cache: N-cookie d=00000000b2a3e576{9p.inode} n=00000000761a5e16
	[  +0.001160] FS-Cache: N-key=[8] '12683b0000000000'
	
	
	==> etcd [36cc42d34657f7dcd2d69e34f5d7f8df00436cc8a4e4a7f2a34bc719382488be] <==
	{"level":"info","ts":"2024-01-16T04:06:40.409142Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-01-16T04:06:40.409179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-01-16T04:06:40.409212Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-01-16T04:06:40.410205Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-775662 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-16T04:06:40.410473Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-16T04:06:40.412776Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-16T04:06:40.413709Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-16T04:06:40.41385Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-01-16T04:06:40.424936Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T04:06:40.429164Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T04:06:40.429349Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T04:06:40.429437Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-16T04:06:40.42952Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-16T04:06:40.429445Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"warn","ts":"2024-01-16T04:07:03.716964Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.428787ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128026529288245705 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/ephemeral-volume-controller\" mod_revision:328 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/ephemeral-volume-controller\" value_size:144 >> failure:<request_range:<key:\"/registry/serviceaccounts/kube-system/ephemeral-volume-controller\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-01-16T04:07:03.725472Z","caller":"traceutil/trace.go:171","msg":"trace[2114027591] transaction","detail":"{read_only:false; response_revision:422; number_of_response:1; }","duration":"169.505713ms","start":"2024-01-16T04:07:03.555947Z","end":"2024-01-16T04:07:03.725452Z","steps":["trace[2114027591] 'process raft request'  (duration: 41.729525ms)","trace[2114027591] 'compare'  (duration: 100.35409ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-16T04:07:03.765415Z","caller":"traceutil/trace.go:171","msg":"trace[731987256] transaction","detail":"{read_only:false; response_revision:423; number_of_response:1; }","duration":"208.396016ms","start":"2024-01-16T04:07:03.557006Z","end":"2024-01-16T04:07:03.765402Z","steps":["trace[731987256] 'process raft request'  (duration: 208.130136ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T04:07:03.769164Z","caller":"traceutil/trace.go:171","msg":"trace[1801923125] transaction","detail":"{read_only:false; response_revision:424; number_of_response:1; }","duration":"116.789479ms","start":"2024-01-16T04:07:03.652334Z","end":"2024-01-16T04:07:03.769123Z","steps":["trace[1801923125] 'process raft request'  (duration: 113.026736ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T04:07:03.769388Z","caller":"traceutil/trace.go:171","msg":"trace[1687709982] linearizableReadLoop","detail":"{readStateIndex:435; appliedIndex:432; }","duration":"104.563353ms","start":"2024-01-16T04:07:03.664817Z","end":"2024-01-16T04:07:03.76938Z","steps":["trace[1687709982] 'read index received'  (duration: 16.185009ms)","trace[1687709982] 'applied index is now lower than readState.Index'  (duration: 88.377286ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-16T04:07:03.769503Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.697298ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-775662\" ","response":"range_response_count:1 size:5743"}
	{"level":"info","ts":"2024-01-16T04:07:03.769531Z","caller":"traceutil/trace.go:171","msg":"trace[586072915] range","detail":"{range_begin:/registry/minions/addons-775662; range_end:; response_count:1; response_revision:425; }","duration":"104.738651ms","start":"2024-01-16T04:07:03.664786Z","end":"2024-01-16T04:07:03.769524Z","steps":["trace[586072915] 'agreement among raft nodes before linearized reading'  (duration: 104.649981ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T04:07:03.769654Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.794198ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/default/cloud-spanner-emulator\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-16T04:07:03.769679Z","caller":"traceutil/trace.go:171","msg":"trace[868109112] range","detail":"{range_begin:/registry/services/specs/default/cloud-spanner-emulator; range_end:; response_count:0; response_revision:425; }","duration":"104.819962ms","start":"2024-01-16T04:07:03.664853Z","end":"2024-01-16T04:07:03.769673Z","steps":["trace[868109112] 'agreement among raft nodes before linearized reading'  (duration: 104.780939ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T04:07:03.769801Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.909781ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-01-16T04:07:03.769827Z","caller":"traceutil/trace.go:171","msg":"trace[2077204373] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:425; }","duration":"104.934141ms","start":"2024-01-16T04:07:03.664886Z","end":"2024-01-16T04:07:03.76982Z","steps":["trace[2077204373] 'agreement among raft nodes before linearized reading'  (duration: 104.892846ms)"],"step_count":1}
	
	
	==> gcp-auth [d34ed2836b2fe5313fd02ac0a7199c4c5aeca665e1b1167c81625378e177af81] <==
	2024/01/16 04:08:45 GCP Auth Webhook started!
	2024/01/16 04:08:58 Ready to marshal response ...
	2024/01/16 04:08:58 Ready to write response ...
	2024/01/16 04:09:03 Ready to marshal response ...
	2024/01/16 04:09:03 Ready to write response ...
	2024/01/16 04:09:21 Ready to marshal response ...
	2024/01/16 04:09:21 Ready to write response ...
	2024/01/16 04:09:24 Ready to marshal response ...
	2024/01/16 04:09:24 Ready to write response ...
	2024/01/16 04:09:52 Ready to marshal response ...
	2024/01/16 04:09:52 Ready to write response ...
	2024/01/16 04:09:52 Ready to marshal response ...
	2024/01/16 04:09:52 Ready to write response ...
	2024/01/16 04:10:00 Ready to marshal response ...
	2024/01/16 04:10:00 Ready to write response ...
	2024/01/16 04:10:08 Ready to marshal response ...
	2024/01/16 04:10:08 Ready to write response ...
	2024/01/16 04:10:08 Ready to marshal response ...
	2024/01/16 04:10:08 Ready to write response ...
	2024/01/16 04:10:08 Ready to marshal response ...
	2024/01/16 04:10:08 Ready to write response ...
	2024/01/16 04:11:42 Ready to marshal response ...
	2024/01/16 04:11:42 Ready to write response ...
	
	
	==> kernel <==
	 04:12:08 up 10:54,  0 users,  load average: 0.89, 1.68, 2.50
	Linux addons-775662 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [1e42c37eeba0422266b5ea293a18c186a2cb272fe6dcb68497cb18b9ac9c8493] <==
	I0116 04:10:02.363672       1 main.go:227] handling current node
	I0116 04:10:12.374340       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 04:10:12.374372       1 main.go:227] handling current node
	I0116 04:10:22.378904       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 04:10:22.378933       1 main.go:227] handling current node
	I0116 04:10:32.383090       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 04:10:32.383119       1 main.go:227] handling current node
	I0116 04:10:42.393816       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 04:10:42.393846       1 main.go:227] handling current node
	I0116 04:10:52.399157       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 04:10:52.399190       1 main.go:227] handling current node
	I0116 04:11:02.403044       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 04:11:02.403072       1 main.go:227] handling current node
	I0116 04:11:12.407207       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 04:11:12.407238       1 main.go:227] handling current node
	I0116 04:11:22.411842       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 04:11:22.411872       1 main.go:227] handling current node
	I0116 04:11:32.424206       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 04:11:32.424232       1 main.go:227] handling current node
	I0116 04:11:42.429039       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 04:11:42.429156       1 main.go:227] handling current node
	I0116 04:11:52.433413       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 04:11:52.433444       1 main.go:227] handling current node
	I0116 04:12:02.444961       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 04:12:02.444990       1 main.go:227] handling current node
	
	
	==> kube-apiserver [1c24557c27c423ad2adca3ff83d2c7860c713d0f206d63725b22eb79e33e4e52] <==
	W0116 04:09:17.082653       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0116 04:09:21.708167       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0116 04:09:22.099481       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.27.52"}
	I0116 04:09:40.471436       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0116 04:09:40.471484       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0116 04:09:40.487154       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0116 04:09:40.487212       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0116 04:09:40.516876       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0116 04:09:40.516933       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0116 04:09:40.521486       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0116 04:09:40.521536       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0116 04:09:40.540904       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0116 04:09:40.540970       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0116 04:09:40.545075       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0116 04:09:40.545128       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0116 04:09:40.562813       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0116 04:09:40.562880       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0116 04:09:40.570055       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0116 04:09:40.570104       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0116 04:09:41.522484       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0116 04:09:41.570277       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0116 04:09:41.592588       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0116 04:10:04.317908       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0116 04:10:08.169013       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.96.106.110"}
	I0116 04:11:42.597905       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.27.73"}
	
	
	==> kube-controller-manager [3104f9d66b0d9e714b770d97a86007e470c3c218775525423c46cf71c8d3ebc1] <==
	W0116 04:11:01.126948       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0116 04:11:01.126983       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0116 04:11:11.765424       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0116 04:11:11.765460       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0116 04:11:22.868444       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0116 04:11:22.868479       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0116 04:11:42.275966       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0116 04:11:42.312353       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-xfg2x"
	I0116 04:11:42.323313       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="47.576434ms"
	I0116 04:11:42.336575       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="13.017795ms"
	I0116 04:11:42.336895       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="56.22µs"
	I0116 04:11:42.349184       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="240.798µs"
	I0116 04:11:45.035927       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="74.443µs"
	I0116 04:11:46.033604       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="50.747µs"
	I0116 04:11:47.035679       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="41.788µs"
	W0116 04:11:57.462716       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0116 04:11:57.462745       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0116 04:11:59.270075       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0116 04:11:59.270109       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0116 04:11:59.752981       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0116 04:11:59.753564       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="6.638µs"
	I0116 04:11:59.757486       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0116 04:12:01.082129       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="59.453µs"
	W0116 04:12:07.874309       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0116 04:12:07.874345       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [31a7c84e2beb2e15f37884bb50aaadad8bccc0bed1da9069580e535a6f132f2e] <==
	I0116 04:07:05.815046       1 server_others.go:69] "Using iptables proxy"
	I0116 04:07:05.888300       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0116 04:07:06.132887       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0116 04:07:06.141310       1 server_others.go:152] "Using iptables Proxier"
	I0116 04:07:06.141360       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0116 04:07:06.141367       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0116 04:07:06.141452       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0116 04:07:06.141894       1 server.go:846] "Version info" version="v1.28.4"
	I0116 04:07:06.141915       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0116 04:07:06.145864       1 config.go:188] "Starting service config controller"
	I0116 04:07:06.145970       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0116 04:07:06.146019       1 config.go:97] "Starting endpoint slice config controller"
	I0116 04:07:06.146067       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0116 04:07:06.146709       1 config.go:315] "Starting node config controller"
	I0116 04:07:06.146778       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0116 04:07:06.247044       1 shared_informer.go:318] Caches are synced for node config
	I0116 04:07:06.247151       1 shared_informer.go:318] Caches are synced for service config
	I0116 04:07:06.247163       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [5c3fd4646f1bea194b19dc2c00b7584d0a51e7d87dd37dc711097c1e0a3be0b5] <==
	W0116 04:06:43.816489       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0116 04:06:43.817246       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0116 04:06:44.664653       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0116 04:06:44.664809       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0116 04:06:44.671016       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0116 04:06:44.671138       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0116 04:06:44.706849       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0116 04:06:44.706936       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0116 04:06:44.761955       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0116 04:06:44.762005       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0116 04:06:44.771039       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0116 04:06:44.771079       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0116 04:06:44.777481       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0116 04:06:44.777589       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0116 04:06:44.789694       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0116 04:06:44.790248       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0116 04:06:44.834543       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0116 04:06:44.834665       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0116 04:06:44.864151       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0116 04:06:44.864270       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0116 04:06:44.890158       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0116 04:06:44.890196       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0116 04:06:44.898210       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0116 04:06:44.898250       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0116 04:06:47.678063       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 16 04:11:47 addons-775662 kubelet[1352]: E0116 04:11:47.070226    1352 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/4cf4d92d64ed03e169a8826b354ec790c2cdb900529cbfb0e8fd80584c1eb248/diff" to get inode usage: stat /var/lib/containers/storage/overlay/4cf4d92d64ed03e169a8826b354ec790c2cdb900529cbfb0e8fd80584c1eb248/diff: no such file or directory, extraDiskErr: <nil>
	Jan 16 04:11:55 addons-775662 kubelet[1352]: I0116 04:11:55.873362    1352 scope.go:117] "RemoveContainer" containerID="cfdee6bb6114fd5184c3eab7434f702124f07601890eda1c60c6cc39b5f2134b"
	Jan 16 04:11:55 addons-775662 kubelet[1352]: E0116 04:11:55.873598    1352 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(73efc9e5-d9e9-4a77-8f6d-1657ed39093f)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="73efc9e5-d9e9-4a77-8f6d-1657ed39093f"
	Jan 16 04:11:58 addons-775662 kubelet[1352]: I0116 04:11:58.534421    1352 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlmvw\" (UniqueName: \"kubernetes.io/projected/73efc9e5-d9e9-4a77-8f6d-1657ed39093f-kube-api-access-xlmvw\") pod \"73efc9e5-d9e9-4a77-8f6d-1657ed39093f\" (UID: \"73efc9e5-d9e9-4a77-8f6d-1657ed39093f\") "
	Jan 16 04:11:58 addons-775662 kubelet[1352]: I0116 04:11:58.539416    1352 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73efc9e5-d9e9-4a77-8f6d-1657ed39093f-kube-api-access-xlmvw" (OuterVolumeSpecName: "kube-api-access-xlmvw") pod "73efc9e5-d9e9-4a77-8f6d-1657ed39093f" (UID: "73efc9e5-d9e9-4a77-8f6d-1657ed39093f"). InnerVolumeSpecName "kube-api-access-xlmvw". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 16 04:11:58 addons-775662 kubelet[1352]: I0116 04:11:58.634766    1352 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-xlmvw\" (UniqueName: \"kubernetes.io/projected/73efc9e5-d9e9-4a77-8f6d-1657ed39093f-kube-api-access-xlmvw\") on node \"addons-775662\" DevicePath \"\""
	Jan 16 04:11:59 addons-775662 kubelet[1352]: I0116 04:11:59.046127    1352 scope.go:117] "RemoveContainer" containerID="cfdee6bb6114fd5184c3eab7434f702124f07601890eda1c60c6cc39b5f2134b"
	Jan 16 04:12:00 addons-775662 kubelet[1352]: I0116 04:12:00.872304    1352 scope.go:117] "RemoveContainer" containerID="4235e705f977c5062a8dd262691b73cf699122dc97ff07ee988c8fd501a45543"
	Jan 16 04:12:00 addons-775662 kubelet[1352]: I0116 04:12:00.873926    1352 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1c463a03-1959-4729-8cdf-7458886a2272" path="/var/lib/kubelet/pods/1c463a03-1959-4729-8cdf-7458886a2272/volumes"
	Jan 16 04:12:00 addons-775662 kubelet[1352]: I0116 04:12:00.874393    1352 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="73efc9e5-d9e9-4a77-8f6d-1657ed39093f" path="/var/lib/kubelet/pods/73efc9e5-d9e9-4a77-8f6d-1657ed39093f/volumes"
	Jan 16 04:12:00 addons-775662 kubelet[1352]: I0116 04:12:00.874897    1352 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="fb0a9e30-2d09-478a-aa51-106079a39214" path="/var/lib/kubelet/pods/fb0a9e30-2d09-478a-aa51-106079a39214/volumes"
	Jan 16 04:12:01 addons-775662 kubelet[1352]: I0116 04:12:01.064239    1352 scope.go:117] "RemoveContainer" containerID="4235e705f977c5062a8dd262691b73cf699122dc97ff07ee988c8fd501a45543"
	Jan 16 04:12:01 addons-775662 kubelet[1352]: I0116 04:12:01.064451    1352 scope.go:117] "RemoveContainer" containerID="ca82e9e8e4042f4fad2e89d08d284df33f7745acfd4ba06ffd06bfe0cd86c086"
	Jan 16 04:12:01 addons-775662 kubelet[1352]: E0116 04:12:01.064706    1352 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-xfg2x_default(e3630885-7a4d-4672-9d6e-f14e4e7d02f4)\"" pod="default/hello-world-app-5d77478584-xfg2x" podUID="e3630885-7a4d-4672-9d6e-f14e4e7d02f4"
	Jan 16 04:12:03 addons-775662 kubelet[1352]: I0116 04:12:03.070288    1352 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d85f404b-8c27-4550-ab4c-77691d6ba980-webhook-cert\") pod \"d85f404b-8c27-4550-ab4c-77691d6ba980\" (UID: \"d85f404b-8c27-4550-ab4c-77691d6ba980\") "
	Jan 16 04:12:03 addons-775662 kubelet[1352]: I0116 04:12:03.070358    1352 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tm2lx\" (UniqueName: \"kubernetes.io/projected/d85f404b-8c27-4550-ab4c-77691d6ba980-kube-api-access-tm2lx\") pod \"d85f404b-8c27-4550-ab4c-77691d6ba980\" (UID: \"d85f404b-8c27-4550-ab4c-77691d6ba980\") "
	Jan 16 04:12:03 addons-775662 kubelet[1352]: I0116 04:12:03.070952    1352 scope.go:117] "RemoveContainer" containerID="679cc5bfb00d54532a9f6e7a3d9f1b0df07a690e3dd0b2faf1378a985116d965"
	Jan 16 04:12:03 addons-775662 kubelet[1352]: I0116 04:12:03.073958    1352 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d85f404b-8c27-4550-ab4c-77691d6ba980-kube-api-access-tm2lx" (OuterVolumeSpecName: "kube-api-access-tm2lx") pod "d85f404b-8c27-4550-ab4c-77691d6ba980" (UID: "d85f404b-8c27-4550-ab4c-77691d6ba980"). InnerVolumeSpecName "kube-api-access-tm2lx". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 16 04:12:03 addons-775662 kubelet[1352]: I0116 04:12:03.075496    1352 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d85f404b-8c27-4550-ab4c-77691d6ba980-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "d85f404b-8c27-4550-ab4c-77691d6ba980" (UID: "d85f404b-8c27-4550-ab4c-77691d6ba980"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 16 04:12:03 addons-775662 kubelet[1352]: I0116 04:12:03.097424    1352 scope.go:117] "RemoveContainer" containerID="679cc5bfb00d54532a9f6e7a3d9f1b0df07a690e3dd0b2faf1378a985116d965"
	Jan 16 04:12:03 addons-775662 kubelet[1352]: E0116 04:12:03.097883    1352 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"679cc5bfb00d54532a9f6e7a3d9f1b0df07a690e3dd0b2faf1378a985116d965\": container with ID starting with 679cc5bfb00d54532a9f6e7a3d9f1b0df07a690e3dd0b2faf1378a985116d965 not found: ID does not exist" containerID="679cc5bfb00d54532a9f6e7a3d9f1b0df07a690e3dd0b2faf1378a985116d965"
	Jan 16 04:12:03 addons-775662 kubelet[1352]: I0116 04:12:03.097931    1352 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"679cc5bfb00d54532a9f6e7a3d9f1b0df07a690e3dd0b2faf1378a985116d965"} err="failed to get container status \"679cc5bfb00d54532a9f6e7a3d9f1b0df07a690e3dd0b2faf1378a985116d965\": rpc error: code = NotFound desc = could not find container \"679cc5bfb00d54532a9f6e7a3d9f1b0df07a690e3dd0b2faf1378a985116d965\": container with ID starting with 679cc5bfb00d54532a9f6e7a3d9f1b0df07a690e3dd0b2faf1378a985116d965 not found: ID does not exist"
	Jan 16 04:12:03 addons-775662 kubelet[1352]: I0116 04:12:03.176912    1352 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-tm2lx\" (UniqueName: \"kubernetes.io/projected/d85f404b-8c27-4550-ab4c-77691d6ba980-kube-api-access-tm2lx\") on node \"addons-775662\" DevicePath \"\""
	Jan 16 04:12:03 addons-775662 kubelet[1352]: I0116 04:12:03.177288    1352 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d85f404b-8c27-4550-ab4c-77691d6ba980-webhook-cert\") on node \"addons-775662\" DevicePath \"\""
	Jan 16 04:12:04 addons-775662 kubelet[1352]: I0116 04:12:04.873282    1352 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d85f404b-8c27-4550-ab4c-77691d6ba980" path="/var/lib/kubelet/pods/d85f404b-8c27-4550-ab4c-77691d6ba980/volumes"
	
	
	==> storage-provisioner [e5458f673236e04d2d42cdf5ad0220229786b85b143eaa8a3902253418e152cc] <==
	I0116 04:07:33.674446       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0116 04:07:33.691688       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0116 04:07:33.691964       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0116 04:07:33.702830       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0116 04:07:33.703100       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-775662_ce068f61-6ec7-4f7a-b800-ff9b66a7001e!
	I0116 04:07:33.711299       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fc6aa4c3-765f-4262-9f32-bda0e30d84b5", APIVersion:"v1", ResourceVersion:"930", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-775662_ce068f61-6ec7-4f7a-b800-ff9b66a7001e became leader
	I0116 04:07:33.805959       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-775662_ce068f61-6ec7-4f7a-b800-ff9b66a7001e!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-775662 -n addons-775662
helpers_test.go:261: (dbg) Run:  kubectl --context addons-775662 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (168.37s)
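For reference, the remote command's exit status 28 propagated through ssh is curl's operation-timed-out code: the in-node request to the ingress controller on 127.0.0.1 never completed, rather than being refused. A minimal sketch for reproducing the probe by hand, assuming the addons-775662 profile from the logs above; the --max-time bound is an addition for faster feedback, not part of the test:

	out/minikube-linux-arm64 -p addons-775662 ssh \
	  "curl -sv --max-time 10 -H 'Host: nginx.example.com' http://127.0.0.1/"
	kubectl --context addons-775662 -n ingress-nginx get pods,svc -o wide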

TestFunctional/parallel/ImageCommands/ImageListJson (0.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-032172 image ls --format json --alsologtostderr:
[]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-032172 image ls --format json --alsologtostderr:
I0116 04:17:14.841773 2448240 out.go:296] Setting OutFile to fd 1 ...
I0116 04:17:14.842124 2448240 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 04:17:14.842137 2448240 out.go:309] Setting ErrFile to fd 2...
I0116 04:17:14.842144 2448240 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 04:17:14.842493 2448240 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-2415678/.minikube/bin
I0116 04:17:14.843976 2448240 config.go:182] Loaded profile config "functional-032172": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 04:17:14.846773 2448240 config.go:182] Loaded profile config "functional-032172": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 04:17:14.847414 2448240 cli_runner.go:164] Run: docker container inspect functional-032172 --format={{.State.Status}}
I0116 04:17:14.868657 2448240 ssh_runner.go:195] Run: systemctl --version
I0116 04:17:14.868717 2448240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-032172
I0116 04:17:14.895303 2448240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35326 SSHKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/machines/functional-032172/id_rsa Username:docker}
I0116 04:17:15.028645 2448240 ssh_runner.go:195] Run: sudo crictl images --output json
W0116 04:17:15.183948 2448240 cache_images.go:715] Failed to list images for profile functional-032172 crictl images: sudo crictl images --output json: Process exited with status 1
stdout:

stderr:
E0116 04:17:15.179824    7077 remote_image.go:136] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error locating item named \"manifest\" for image with ID \"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02\": file does not exist" filter="&ImageFilter{Image:&ImageSpec{Image:,Annotations:map[string]string{},},}"
time="2024-01-16T04:17:15Z" level=fatal msg="listing images: rpc error: code = Unknown desc = error locating item named \"manifest\" for image with ID \"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02\": file does not exist"
functional_test.go:274: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.42s)
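The stderr shows cri-o failing to locate the manifest for image ID 71a676dd070f..., which makes the whole crictl images --output json call fail and leaves minikube's image list empty. A hedged sketch for confirming the broken storage entry on the node, assuming the functional-032172 profile from this run; crictl inspecti is a standard crictl subcommand, not something the test invokes:

	out/minikube-linux-arm64 -p functional-032172 ssh "sudo crictl images --output json"
	out/minikube-linux-arm64 -p functional-032172 ssh \
	  "sudo crictl inspecti 71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02"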

TestIngressAddonLegacy/serial/ValidateIngressAddons (177.85s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:207: (dbg) Run:  kubectl --context ingress-addon-legacy-865845 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:207: (dbg) Done: kubectl --context ingress-addon-legacy-865845 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (9.141697147s)
addons_test.go:232: (dbg) Run:  kubectl --context ingress-addon-legacy-865845 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context ingress-addon-legacy-865845 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [9ce20815-f75b-472f-a141-61c5f3b1b77c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E0116 04:19:14.415779 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/client.crt: no such file or directory
helpers_test.go:344: "nginx" [9ce20815-f75b-472f-a141-61c5f3b1b77c] Running
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.003391248s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-865845 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0116 04:21:17.543379 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/functional-032172/client.crt: no such file or directory
E0116 04:21:17.548770 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/functional-032172/client.crt: no such file or directory
E0116 04:21:17.559042 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/functional-032172/client.crt: no such file or directory
E0116 04:21:17.579413 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/functional-032172/client.crt: no such file or directory
E0116 04:21:17.619730 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/functional-032172/client.crt: no such file or directory
E0116 04:21:17.700083 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/functional-032172/client.crt: no such file or directory
E0116 04:21:17.860478 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/functional-032172/client.crt: no such file or directory
E0116 04:21:18.181085 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/functional-032172/client.crt: no such file or directory
E0116 04:21:18.822036 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/functional-032172/client.crt: no such file or directory
E0116 04:21:20.102281 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/functional-032172/client.crt: no such file or directory
E0116 04:21:22.662539 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/functional-032172/client.crt: no such file or directory
E0116 04:21:27.783224 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/functional-032172/client.crt: no such file or directory
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ingress-addon-legacy-865845 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.520617566s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context ingress-addon-legacy-865845 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-865845 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
E0116 04:21:38.023481 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/functional-032172/client.crt: no such file or directory
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.025504875s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
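The timed-out lookup means nothing answered DNS queries at 192.168.49.2 on port 53, pointing at the ingress-dns responder rather than the test harness. A minimal manual check, assuming the ingress-addon-legacy-865845 profile from this run and that the addon is still enabled; dig with explicit timeouts stands in for nslookup, and the kube-ingress-dns-minikube pod name matches the kubelet log earlier in this report:

	MINIKUBE_IP=$(out/minikube-linux-arm64 -p ingress-addon-legacy-865845 ip)
	dig +time=5 +tries=1 hello-john.test @"$MINIKUBE_IP"
	kubectl --context ingress-addon-legacy-865845 -n kube-system logs kube-ingress-dns-minikube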
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-865845 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-865845 addons disable ingress-dns --alsologtostderr -v=1: (1.796486423s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-865845 addons disable ingress --alsologtostderr -v=1
E0116 04:21:58.503726 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/functional-032172/client.crt: no such file or directory
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-865845 addons disable ingress --alsologtostderr -v=1: (7.614637983s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-865845
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-865845:

-- stdout --
	[
	    {
	        "Id": "97f513553653247b9fb944a02fce2cc906feadb462e0cf935a088d0e79104371",
	        "Created": "2024-01-16T04:17:40.063412676Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2449111,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-16T04:17:40.406310771Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20e2d9b56eb2e595fd2b9c5719a0e58f3d7f8c692190d8fde2558cb6a9714f01",
	        "ResolvConfPath": "/var/lib/docker/containers/97f513553653247b9fb944a02fce2cc906feadb462e0cf935a088d0e79104371/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/97f513553653247b9fb944a02fce2cc906feadb462e0cf935a088d0e79104371/hostname",
	        "HostsPath": "/var/lib/docker/containers/97f513553653247b9fb944a02fce2cc906feadb462e0cf935a088d0e79104371/hosts",
	        "LogPath": "/var/lib/docker/containers/97f513553653247b9fb944a02fce2cc906feadb462e0cf935a088d0e79104371/97f513553653247b9fb944a02fce2cc906feadb462e0cf935a088d0e79104371-json.log",
	        "Name": "/ingress-addon-legacy-865845",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ingress-addon-legacy-865845:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-865845",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/8cd05cfeb4d80c5c561039657d33938fb1dfbac2e3ad1f21ee496859994d5b8a-init/diff:/var/lib/docker/overlay2/4fdef913b89fa4836b2db5064ca9b972974c59582e71c63616575ab943b0844e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8cd05cfeb4d80c5c561039657d33938fb1dfbac2e3ad1f21ee496859994d5b8a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8cd05cfeb4d80c5c561039657d33938fb1dfbac2e3ad1f21ee496859994d5b8a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8cd05cfeb4d80c5c561039657d33938fb1dfbac2e3ad1f21ee496859994d5b8a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-865845",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-865845/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-865845",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-865845",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-865845",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "74112544e590edb7420c1806a41c78b0f8b8857a55f07c3ed1c4d5c2782f4aa9",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35331"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35330"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35327"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35329"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35328"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/74112544e590",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-865845": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "97f513553653",
	                        "ingress-addon-legacy-865845"
	                    ],
	                    "NetworkID": "00f215502b801c414a9fc876386a246e78523ed8cd8882b3a8a079b2a00bad54",
	                    "EndpointID": "a725fac2d842afd67c5e027626c86afe17df71c3361826f4be187b3cbe16ac2d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
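The inspect dump above ends with the part worth reading for this failure: the Ports map, where every container port (22, 2376, 5000, 8443, 32443) is published on an ephemeral host port bound to 127.0.0.1. A single mapping can be pulled back out with the same Go template minikube itself uses later in this log; a minimal sketch for this profile's SSH port:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ingress-addon-legacy-865845

For this run it prints 35331, matching the Ports block above.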
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-865845 -n ingress-addon-legacy-865845
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-865845 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-865845 logs -n 25: (1.583376394s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	
	==> Audit <==
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| mount          | -p functional-032172                                                   | functional-032172           | jenkins | v1.32.0 | 16 Jan 24 04:17 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2109335110/001:/mount1 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| ssh            | functional-032172 ssh findmnt                                          | functional-032172           | jenkins | v1.32.0 | 16 Jan 24 04:17 UTC |                     |
	|                | -T /mount1                                                             |                             |         |         |                     |                     |
	| mount          | -p functional-032172                                                   | functional-032172           | jenkins | v1.32.0 | 16 Jan 24 04:17 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2109335110/001:/mount2 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| ssh            | functional-032172 ssh findmnt                                          | functional-032172           | jenkins | v1.32.0 | 16 Jan 24 04:17 UTC | 16 Jan 24 04:17 UTC |
	|                | -T /mount1                                                             |                             |         |         |                     |                     |
	| ssh            | functional-032172 ssh findmnt                                          | functional-032172           | jenkins | v1.32.0 | 16 Jan 24 04:17 UTC | 16 Jan 24 04:17 UTC |
	|                | -T /mount2                                                             |                             |         |         |                     |                     |
	| ssh            | functional-032172 ssh findmnt                                          | functional-032172           | jenkins | v1.32.0 | 16 Jan 24 04:17 UTC | 16 Jan 24 04:17 UTC |
	|                | -T /mount3                                                             |                             |         |         |                     |                     |
	| mount          | -p functional-032172                                                   | functional-032172           | jenkins | v1.32.0 | 16 Jan 24 04:17 UTC |                     |
	|                | --kill=true                                                            |                             |         |         |                     |                     |
	| update-context | functional-032172                                                      | functional-032172           | jenkins | v1.32.0 | 16 Jan 24 04:17 UTC | 16 Jan 24 04:17 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-032172                                                      | functional-032172           | jenkins | v1.32.0 | 16 Jan 24 04:17 UTC | 16 Jan 24 04:17 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-032172                                                      | functional-032172           | jenkins | v1.32.0 | 16 Jan 24 04:17 UTC | 16 Jan 24 04:17 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| image          | functional-032172                                                      | functional-032172           | jenkins | v1.32.0 | 16 Jan 24 04:17 UTC | 16 Jan 24 04:17 UTC |
	|                | image ls --format short                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-032172                                                      | functional-032172           | jenkins | v1.32.0 | 16 Jan 24 04:17 UTC | 16 Jan 24 04:17 UTC |
	|                | image ls --format yaml                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-032172 ssh pgrep                                            | functional-032172           | jenkins | v1.32.0 | 16 Jan 24 04:17 UTC |                     |
	|                | buildkitd                                                              |                             |         |         |                     |                     |
	| image          | functional-032172 image build -t                                       | functional-032172           | jenkins | v1.32.0 | 16 Jan 24 04:17 UTC | 16 Jan 24 04:17 UTC |
	|                | localhost/my-image:functional-032172                                   |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                       |                             |         |         |                     |                     |
	| image          | functional-032172                                                      | functional-032172           | jenkins | v1.32.0 | 16 Jan 24 04:17 UTC | 16 Jan 24 04:17 UTC |
	|                | image ls --format json                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-032172                                                      | functional-032172           | jenkins | v1.32.0 | 16 Jan 24 04:17 UTC | 16 Jan 24 04:17 UTC |
	|                | image ls --format table                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-032172 image ls                                             | functional-032172           | jenkins | v1.32.0 | 16 Jan 24 04:17 UTC | 16 Jan 24 04:17 UTC |
	| delete         | -p functional-032172                                                   | functional-032172           | jenkins | v1.32.0 | 16 Jan 24 04:17 UTC | 16 Jan 24 04:17 UTC |
	| start          | -p ingress-addon-legacy-865845                                         | ingress-addon-legacy-865845 | jenkins | v1.32.0 | 16 Jan 24 04:17 UTC | 16 Jan 24 04:18 UTC |
	|                | --kubernetes-version=v1.18.20                                          |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                                                   |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-865845                                            | ingress-addon-legacy-865845 | jenkins | v1.32.0 | 16 Jan 24 04:18 UTC | 16 Jan 24 04:19 UTC |
	|                | addons enable ingress                                                  |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-865845                                            | ingress-addon-legacy-865845 | jenkins | v1.32.0 | 16 Jan 24 04:19 UTC | 16 Jan 24 04:19 UTC |
	|                | addons enable ingress-dns                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-865845                                            | ingress-addon-legacy-865845 | jenkins | v1.32.0 | 16 Jan 24 04:19 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                                          |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                           |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-865845 ip                                         | ingress-addon-legacy-865845 | jenkins | v1.32.0 | 16 Jan 24 04:21 UTC | 16 Jan 24 04:21 UTC |
	| addons         | ingress-addon-legacy-865845                                            | ingress-addon-legacy-865845 | jenkins | v1.32.0 | 16 Jan 24 04:21 UTC | 16 Jan 24 04:21 UTC |
	|                | addons disable ingress-dns                                             |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-865845                                            | ingress-addon-legacy-865845 | jenkins | v1.32.0 | 16 Jan 24 04:21 UTC | 16 Jan 24 04:21 UTC |
	|                | addons disable ingress                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
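The ssh curl row above, with no End Time, is the ingress probe this test fails on. While the profile is still up it can be replayed by hand with the same invocation the test makes:

	out/minikube-linux-arm64 -p ingress-addon-legacy-865845 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"

A hang here instead of an nginx response reproduces the ValidateIngressAddons failure.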
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 04:17:19
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 04:17:19.690861 2448659 out.go:296] Setting OutFile to fd 1 ...
	I0116 04:17:19.691049 2448659 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 04:17:19.691071 2448659 out.go:309] Setting ErrFile to fd 2...
	I0116 04:17:19.691097 2448659 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 04:17:19.691409 2448659 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-2415678/.minikube/bin
	I0116 04:17:19.691902 2448659 out.go:303] Setting JSON to false
	I0116 04:17:19.692884 2448659 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":39571,"bootTime":1705339069,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0116 04:17:19.693002 2448659 start.go:138] virtualization:  
	I0116 04:17:19.696480 2448659 out.go:177] * [ingress-addon-legacy-865845] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0116 04:17:19.698940 2448659 notify.go:220] Checking for updates...
	I0116 04:17:19.703965 2448659 out.go:177]   - MINIKUBE_LOCATION=17965
	I0116 04:17:19.706675 2448659 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 04:17:19.708830 2448659 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17965-2415678/kubeconfig
	I0116 04:17:19.711036 2448659 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-2415678/.minikube
	I0116 04:17:19.713547 2448659 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0116 04:17:19.716259 2448659 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 04:17:19.718790 2448659 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 04:17:19.744994 2448659 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0116 04:17:19.745133 2448659 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 04:17:19.821401 2448659 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:36 SystemTime:2024-01-16 04:17:19.810860714 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0116 04:17:19.821515 2448659 docker.go:295] overlay module found
	I0116 04:17:19.823985 2448659 out.go:177] * Using the docker driver based on user configuration
	I0116 04:17:19.826326 2448659 start.go:298] selected driver: docker
	I0116 04:17:19.826351 2448659 start.go:902] validating driver "docker" against <nil>
	I0116 04:17:19.826366 2448659 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 04:17:19.827015 2448659 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 04:17:19.901536 2448659 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:36 SystemTime:2024-01-16 04:17:19.891867199 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0116 04:17:19.901689 2448659 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0116 04:17:19.902015 2448659 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0116 04:17:19.905087 2448659 out.go:177] * Using Docker driver with root privileges
	I0116 04:17:19.907464 2448659 cni.go:84] Creating CNI manager for ""
	I0116 04:17:19.907485 2448659 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0116 04:17:19.907497 2448659 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0116 04:17:19.907509 2448659 start_flags.go:321] config:
	{Name:ingress-addon-legacy-865845 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-865845 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
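The generated config above is also persisted to disk, so it can be diffed against later runs; for this profile it is written to the config.json path logged a few lines below:

	cat /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/config.json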
	I0116 04:17:19.910509 2448659 out.go:177] * Starting control plane node ingress-addon-legacy-865845 in cluster ingress-addon-legacy-865845
	I0116 04:17:19.913856 2448659 cache.go:121] Beginning downloading kic base image for docker with crio
	I0116 04:17:19.916055 2448659 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0116 04:17:19.918052 2448659 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0116 04:17:19.918267 2448659 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0116 04:17:19.940534 2448659 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0116 04:17:19.940561 2448659 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0116 04:17:19.982844 2448659 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I0116 04:17:19.982887 2448659 cache.go:56] Caching tarball of preloaded images
	I0116 04:17:19.983089 2448659 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0116 04:17:19.985203 2448659 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0116 04:17:19.987418 2448659 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0116 04:17:20.097708 2448659 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4?checksum=md5:8ddd7f37d9a9977fe856222993d36c3d -> /home/jenkins/minikube-integration/17965-2415678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I0116 04:17:32.097665 2448659 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0116 04:17:32.097787 2448659 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17965-2415678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0116 04:17:33.304959 2448659 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
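The checksum query parameter on the preload URL is what the verify step checks against. The same check can be repeated by hand on the cached tarball; expected digest taken from the URL above:

	md5sum /home/jenkins/minikube-integration/17965-2415678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	# expect: 8ddd7f37d9a9977fe856222993d36c3d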
	I0116 04:17:33.305324 2448659 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/config.json ...
	I0116 04:17:33.305356 2448659 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/config.json: {Name:mk9150cfa85c7e4dfc19c4465916cd7e5fbb3932 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 04:17:33.305536 2448659 cache.go:194] Successfully downloaded all kic artifacts
	I0116 04:17:33.305597 2448659 start.go:365] acquiring machines lock for ingress-addon-legacy-865845: {Name:mk48096242c5fb78d0947f8f02a18d56a594d693 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 04:17:33.305649 2448659 start.go:369] acquired machines lock for "ingress-addon-legacy-865845" in 37.989µs
	I0116 04:17:33.305668 2448659 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-865845 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-865845 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 04:17:33.305732 2448659 start.go:125] createHost starting for "" (driver="docker")
	I0116 04:17:33.308183 2448659 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0116 04:17:33.308406 2448659 start.go:159] libmachine.API.Create for "ingress-addon-legacy-865845" (driver="docker")
	I0116 04:17:33.308443 2448659 client.go:168] LocalClient.Create starting
	I0116 04:17:33.308540 2448659 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/ca.pem
	I0116 04:17:33.308576 2448659 main.go:141] libmachine: Decoding PEM data...
	I0116 04:17:33.308594 2448659 main.go:141] libmachine: Parsing certificate...
	I0116 04:17:33.308654 2448659 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/cert.pem
	I0116 04:17:33.308674 2448659 main.go:141] libmachine: Decoding PEM data...
	I0116 04:17:33.308692 2448659 main.go:141] libmachine: Parsing certificate...
	I0116 04:17:33.309062 2448659 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-865845 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0116 04:17:33.325995 2448659 cli_runner.go:211] docker network inspect ingress-addon-legacy-865845 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0116 04:17:33.326078 2448659 network_create.go:281] running [docker network inspect ingress-addon-legacy-865845] to gather additional debugging logs...
	I0116 04:17:33.326100 2448659 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-865845
	W0116 04:17:33.344479 2448659 cli_runner.go:211] docker network inspect ingress-addon-legacy-865845 returned with exit code 1
	I0116 04:17:33.344508 2448659 network_create.go:284] error running [docker network inspect ingress-addon-legacy-865845]: docker network inspect ingress-addon-legacy-865845: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-865845 not found
	I0116 04:17:33.344522 2448659 network_create.go:286] output of [docker network inspect ingress-addon-legacy-865845]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-865845 not found
	
	** /stderr **
	I0116 04:17:33.344624 2448659 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0116 04:17:33.367852 2448659 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40000f9b90}
	I0116 04:17:33.367891 2448659 network_create.go:124] attempt to create docker network ingress-addon-legacy-865845 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0116 04:17:33.367953 2448659 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-865845 ingress-addon-legacy-865845
	I0116 04:17:33.445229 2448659 network_create.go:108] docker network ingress-addon-legacy-865845 192.168.49.0/24 created
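The network just created can be checked against the subnet the allocator picked, reusing the same inspect template as the failed lookup above:

	docker network inspect ingress-addon-legacy-865845 -f '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'

For this run it should print 192.168.49.0/24 192.168.49.1.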
	I0116 04:17:33.445266 2448659 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-865845" container
	I0116 04:17:33.445340 2448659 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0116 04:17:33.462402 2448659 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-865845 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-865845 --label created_by.minikube.sigs.k8s.io=true
	I0116 04:17:33.481588 2448659 oci.go:103] Successfully created a docker volume ingress-addon-legacy-865845
	I0116 04:17:33.481689 2448659 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-865845-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-865845 --entrypoint /usr/bin/test -v ingress-addon-legacy-865845:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0116 04:17:34.982420 2448659 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-865845-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-865845 --entrypoint /usr/bin/test -v ingress-addon-legacy-865845:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib: (1.500688336s)
	I0116 04:17:34.982458 2448659 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-865845
	I0116 04:17:34.982483 2448659 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0116 04:17:34.982503 2448659 kic.go:194] Starting extracting preloaded images to volume ...
	I0116 04:17:34.982594 2448659 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17965-2415678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-865845:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0116 04:17:39.955189 2448659 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17965-2415678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-865845:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.972546055s)
	I0116 04:17:39.955224 2448659 kic.go:203] duration metric: took 4.972718 seconds to extract preloaded images to volume
	W0116 04:17:39.955362 2448659 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0116 04:17:39.955483 2448659 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0116 04:17:40.041487 2448659 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-865845 --name ingress-addon-legacy-865845 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-865845 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-865845 --network ingress-addon-legacy-865845 --ip 192.168.49.2 --volume ingress-addon-legacy-865845:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0
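Each --publish=127.0.0.1::PORT flag in the run command above leaves the host side empty, so Docker assigns an ephemeral loopback port per container port; those are the 353xx ports seen in the earlier inspect dump. They can all be listed after the fact with:

	docker port ingress-addon-legacy-865845

which for this run shows 22/tcp on 127.0.0.1:35331, 8443/tcp on 127.0.0.1:35328, and so on.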
	I0116 04:17:40.415810 2448659 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-865845 --format={{.State.Running}}
	I0116 04:17:40.443393 2448659 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-865845 --format={{.State.Status}}
	I0116 04:17:40.471541 2448659 cli_runner.go:164] Run: docker exec ingress-addon-legacy-865845 stat /var/lib/dpkg/alternatives/iptables
	I0116 04:17:40.555748 2448659 oci.go:144] the created container "ingress-addon-legacy-865845" has a running status.
	I0116 04:17:40.555782 2448659 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17965-2415678/.minikube/machines/ingress-addon-legacy-865845/id_rsa...
	I0116 04:17:41.559028 2448659 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-2415678/.minikube/machines/ingress-addon-legacy-865845/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0116 04:17:41.559125 2448659 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17965-2415678/.minikube/machines/ingress-addon-legacy-865845/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0116 04:17:41.586558 2448659 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-865845 --format={{.State.Status}}
	I0116 04:17:41.615393 2448659 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0116 04:17:41.615424 2448659 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-865845 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0116 04:17:41.680644 2448659 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-865845 --format={{.State.Status}}
	I0116 04:17:41.700929 2448659 machine.go:88] provisioning docker machine ...
	I0116 04:17:41.700962 2448659 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-865845"
	I0116 04:17:41.701036 2448659 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-865845
	I0116 04:17:41.728688 2448659 main.go:141] libmachine: Using SSH client type: native
	I0116 04:17:41.729351 2448659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 35331 <nil> <nil>}
	I0116 04:17:41.729377 2448659 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-865845 && echo "ingress-addon-legacy-865845" | sudo tee /etc/hostname
	I0116 04:17:41.898575 2448659 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-865845
	
	I0116 04:17:41.898667 2448659 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-865845
	I0116 04:17:41.920707 2448659 main.go:141] libmachine: Using SSH client type: native
	I0116 04:17:41.922829 2448659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 35331 <nil> <nil>}
	I0116 04:17:41.922865 2448659 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-865845' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-865845/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-865845' | sudo tee -a /etc/hosts; 
				fi
			fi
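The script above first checks whether any /etc/hosts line already ends in the hostname; if not, it either rewrites an existing 127.0.1.1 entry in place or appends one. The entry can be confirmed later with, roughly:

	out/minikube-linux-arm64 -p ingress-addon-legacy-865845 ssh "grep 127.0.1.1 /etc/hosts"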
	I0116 04:17:42.075566 2448659 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 04:17:42.075602 2448659 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17965-2415678/.minikube CaCertPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17965-2415678/.minikube}
	I0116 04:17:42.075641 2448659 ubuntu.go:177] setting up certificates
	I0116 04:17:42.075652 2448659 provision.go:83] configureAuth start
	I0116 04:17:42.075762 2448659 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-865845
	I0116 04:17:42.103251 2448659 provision.go:138] copyHostCerts
	I0116 04:17:42.103302 2448659 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17965-2415678/.minikube/ca.pem
	I0116 04:17:42.103340 2448659 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-2415678/.minikube/ca.pem, removing ...
	I0116 04:17:42.103348 2448659 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-2415678/.minikube/ca.pem
	I0116 04:17:42.103441 2448659 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17965-2415678/.minikube/ca.pem (1078 bytes)
	I0116 04:17:42.103613 2448659 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17965-2415678/.minikube/cert.pem
	I0116 04:17:42.103638 2448659 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-2415678/.minikube/cert.pem, removing ...
	I0116 04:17:42.103644 2448659 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-2415678/.minikube/cert.pem
	I0116 04:17:42.103677 2448659 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17965-2415678/.minikube/cert.pem (1123 bytes)
	I0116 04:17:42.103785 2448659 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17965-2415678/.minikube/key.pem
	I0116 04:17:42.103830 2448659 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-2415678/.minikube/key.pem, removing ...
	I0116 04:17:42.103835 2448659 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-2415678/.minikube/key.pem
	I0116 04:17:42.103870 2448659 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17965-2415678/.minikube/key.pem (1679 bytes)
	I0116 04:17:42.103932 2448659 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17965-2415678/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17965-2415678/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17965-2415678/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-865845 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-865845]
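The san=[...] list above is what lands in the server certificate's Subject Alternative Name extension; once the cert is written it can be confirmed with:

	openssl x509 -in /home/jenkins/minikube-integration/17965-2415678/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'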
	I0116 04:17:43.151487 2448659 provision.go:172] copyRemoteCerts
	I0116 04:17:43.151587 2448659 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 04:17:43.151637 2448659 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-865845
	I0116 04:17:43.172053 2448659 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35331 SSHKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/machines/ingress-addon-legacy-865845/id_rsa Username:docker}
	I0116 04:17:43.272329 2448659 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0116 04:17:43.272400 2448659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0116 04:17:43.302729 2448659 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-2415678/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0116 04:17:43.302796 2448659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-2415678/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0116 04:17:43.333944 2448659 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-2415678/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0116 04:17:43.334065 2448659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-2415678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0116 04:17:43.365213 2448659 provision.go:86] duration metric: configureAuth took 1.289545101s
	I0116 04:17:43.365240 2448659 ubuntu.go:193] setting minikube options for container-runtime
	I0116 04:17:43.365445 2448659 config.go:182] Loaded profile config "ingress-addon-legacy-865845": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0116 04:17:43.365567 2448659 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-865845
	I0116 04:17:43.386643 2448659 main.go:141] libmachine: Using SSH client type: native
	I0116 04:17:43.387087 2448659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 35331 <nil> <nil>}
	I0116 04:17:43.387110 2448659 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 04:17:43.676984 2448659 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
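The %!s(MISSING) in the logged command is Go's fmt marker for a %s verb logged without its argument, not part of what ran on the node; the SSH output above shows the file that actually landed. It can be re-read at any time with:

	out/minikube-linux-arm64 -p ingress-addon-legacy-865845 ssh "cat /etc/sysconfig/crio.minikube"
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '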
	I0116 04:17:43.677010 2448659 machine.go:91] provisioned docker machine in 1.976058441s
	I0116 04:17:43.677020 2448659 client.go:171] LocalClient.Create took 10.368568847s
	I0116 04:17:43.677034 2448659 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-865845" took 10.368627307s
	I0116 04:17:43.677044 2448659 start.go:300] post-start starting for "ingress-addon-legacy-865845" (driver="docker")
	I0116 04:17:43.677056 2448659 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 04:17:43.677139 2448659 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 04:17:43.677187 2448659 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-865845
	I0116 04:17:43.695720 2448659 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35331 SSHKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/machines/ingress-addon-legacy-865845/id_rsa Username:docker}
	I0116 04:17:43.796948 2448659 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 04:17:43.801818 2448659 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0116 04:17:43.801859 2448659 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0116 04:17:43.801871 2448659 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0116 04:17:43.801879 2448659 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0116 04:17:43.801891 2448659 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-2415678/.minikube/addons for local assets ...
	I0116 04:17:43.801981 2448659 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-2415678/.minikube/files for local assets ...
	I0116 04:17:43.802073 2448659 filesync.go:149] local asset: /home/jenkins/minikube-integration/17965-2415678/.minikube/files/etc/ssl/certs/24210052.pem -> 24210052.pem in /etc/ssl/certs
	I0116 04:17:43.802086 2448659 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-2415678/.minikube/files/etc/ssl/certs/24210052.pem -> /etc/ssl/certs/24210052.pem
	I0116 04:17:43.802212 2448659 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 04:17:43.813824 2448659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-2415678/.minikube/files/etc/ssl/certs/24210052.pem --> /etc/ssl/certs/24210052.pem (1708 bytes)
	I0116 04:17:43.846857 2448659 start.go:303] post-start completed in 169.79446ms
	I0116 04:17:43.847341 2448659 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-865845
	I0116 04:17:43.867113 2448659 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/config.json ...
	I0116 04:17:43.867435 2448659 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0116 04:17:43.867486 2448659 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-865845
	I0116 04:17:43.895216 2448659 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35331 SSHKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/machines/ingress-addon-legacy-865845/id_rsa Username:docker}
	I0116 04:17:43.991717 2448659 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0116 04:17:43.998222 2448659 start.go:128] duration metric: createHost completed in 10.692473553s
	I0116 04:17:43.998251 2448659 start.go:83] releasing machines lock for "ingress-addon-legacy-865845", held for 10.692592368s
	I0116 04:17:43.998330 2448659 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-865845
	I0116 04:17:44.019406 2448659 ssh_runner.go:195] Run: cat /version.json
	I0116 04:17:44.019479 2448659 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-865845
	I0116 04:17:44.019774 2448659 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 04:17:44.019850 2448659 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-865845
	I0116 04:17:44.041759 2448659 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35331 SSHKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/machines/ingress-addon-legacy-865845/id_rsa Username:docker}
	I0116 04:17:44.050356 2448659 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35331 SSHKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/machines/ingress-addon-legacy-865845/id_rsa Username:docker}
	I0116 04:17:44.141727 2448659 ssh_runner.go:195] Run: systemctl --version
	I0116 04:17:44.282269 2448659 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 04:17:44.431389 2448659 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0116 04:17:44.437198 2448659 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 04:17:44.462484 2448659 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0116 04:17:44.462575 2448659 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 04:17:44.511611 2448659 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
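
[Editor's note] The two find/mv runs above disable the loopback and bridge/podman CNI configs by renaming them to *.mk_disabled, so the CNI chosen later (kindnet, per cni.go below) owns pod networking. A standalone sketch of that rename step, assuming the same /etc/cni/net.d layout — not minikube's actual cni.go:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// same patterns the two find commands in the log use
	for _, pattern := range []string{"*loopback.conf*", "*bridge*", "*podman*"} {
		matches, err := filepath.Glob(filepath.Join("/etc/cni/net.d", pattern))
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			continue
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled on a previous run
			}
			// equivalent of: sudo mv <file> <file>.mk_disabled
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				fmt.Fprintln(os.Stderr, err)
			}
		}
	}
}
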
	I0116 04:17:44.511700 2448659 start.go:475] detecting cgroup driver to use...
	I0116 04:17:44.511770 2448659 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0116 04:17:44.511887 2448659 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 04:17:44.532620 2448659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 04:17:44.547980 2448659 docker.go:217] disabling cri-docker service (if available) ...
	I0116 04:17:44.548073 2448659 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 04:17:44.566202 2448659 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 04:17:44.584478 2448659 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 04:17:44.684632 2448659 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 04:17:44.795585 2448659 docker.go:233] disabling docker service ...
	I0116 04:17:44.795695 2448659 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 04:17:44.819796 2448659 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 04:17:44.836077 2448659 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 04:17:44.951322 2448659 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 04:17:45.077456 2448659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 04:17:45.097059 2448659 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 04:17:45.128832 2448659 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0116 04:17:45.128924 2448659 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 04:17:45.146736 2448659 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 04:17:45.146945 2448659 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 04:17:45.164468 2448659 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 04:17:45.180629 2448659 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 04:17:45.195599 2448659 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 04:17:45.209940 2448659 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 04:17:45.223244 2448659 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 04:17:45.236452 2448659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 04:17:45.358840 2448659 ssh_runner.go:195] Run: sudo systemctl restart crio
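
[Editor's note] The sed runs above point cri-o at pause:3.2 and the cgroupfs cgroup manager, then crio is restarted. The same two edits as a hedged Go sketch (illustrative, not minikube's implementation; the conmon_cgroup delete/insert steps are omitted for brevity):

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|'
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.2"`))
	// sed 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
}
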
	I0116 04:17:45.498389 2448659 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 04:17:45.498468 2448659 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 04:17:45.503944 2448659 start.go:543] Will wait 60s for crictl version
	I0116 04:17:45.504010 2448659 ssh_runner.go:195] Run: which crictl
	I0116 04:17:45.508780 2448659 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 04:17:45.553839 2448659 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0116 04:17:45.553996 2448659 ssh_runner.go:195] Run: crio --version
	I0116 04:17:45.604333 2448659 ssh_runner.go:195] Run: crio --version
	I0116 04:17:45.652793 2448659 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I0116 04:17:45.655082 2448659 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-865845 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0116 04:17:45.673776 2448659 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0116 04:17:45.678748 2448659 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
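
[Editor's note] The bash one-liner above rewrites /etc/hosts in one pass: filter out any stale host.minikube.internal entry with grep -v, append the fresh mapping, and copy the temp file back. The same logic as a Go sketch:

package main

import (
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue // drop any stale entry, like grep -v above
		}
		kept = append(kept, line)
	}
	kept = append(kept, "192.168.49.1\thost.minikube.internal")
	if err := os.WriteFile("/etc/hosts",
		[]byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}
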
	I0116 04:17:45.693079 2448659 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0116 04:17:45.693187 2448659 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 04:17:45.746813 2448659 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0116 04:17:45.746897 2448659 ssh_runner.go:195] Run: which lz4
	I0116 04:17:45.752226 2448659 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-2415678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0116 04:17:45.752335 2448659 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0116 04:17:45.757294 2448659 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 04:17:45.757330 2448659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-2415678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (489766197 bytes)
	I0116 04:17:48.185308 2448659 crio.go:444] Took 2.433011 seconds to copy over tarball
	I0116 04:17:48.185442 2448659 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 04:17:50.945105 2448659 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.759612207s)
	I0116 04:17:50.945131 2448659 crio.go:451] Took 2.759744 seconds to extract the tarball
	I0116 04:17:50.945140 2448659 ssh_runner.go:146] rm: /preloaded.tar.lz4
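
[Editor's note] The preload flow above is: stat the tarball on the node, scp it over when the existence check fails, extract it into /var with lz4 to pre-populate the image store, then delete it. A local-exec sketch of that sequence (the real thing runs every command through ssh_runner over the tunnel; the scp step is stubbed out here):

package main

import (
	"os"
	"os/exec"
)

func runCmd(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	const tarball = "/preloaded.tar.lz4"
	if _, err := os.Stat(tarball); os.IsNotExist(err) {
		// in minikube this triggers an scp of the cached tarball;
		// that copy step is omitted in this sketch
		panic("tarball missing: copy it over first")
	}
	// mirrors: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	if err := runCmd("sudo", "tar", "--xattrs", "--xattrs-include",
		"security.capability", "-I", "lz4", "-C", "/var", "-xf", tarball); err != nil {
		panic(err)
	}
	_ = os.Remove(tarball) // rm: /preloaded.tar.lz4
}
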
	I0116 04:17:51.074887 2448659 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 04:17:51.119020 2448659 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0116 04:17:51.119053 2448659 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0116 04:17:51.119096 2448659 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 04:17:51.119132 2448659 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0116 04:17:51.119323 2448659 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0116 04:17:51.119352 2448659 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0116 04:17:51.119408 2448659 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0116 04:17:51.119435 2448659 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0116 04:17:51.119481 2448659 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0116 04:17:51.119501 2448659 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0116 04:17:51.120716 2448659 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0116 04:17:51.121213 2448659 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0116 04:17:51.121424 2448659 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0116 04:17:51.121566 2448659 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0116 04:17:51.121702 2448659 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0116 04:17:51.121836 2448659 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 04:17:51.122092 2448659 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0116 04:17:51.122251 2448659 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
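
[Editor's note] The daemon lookups above all fail (none of these images are in the local Docker daemon), so minikube falls back to asking the node's runtime whether each image is present at the expected ID; a mismatch or a missing image triggers the "needs transfer" path below. A minimal sketch of that decision, using the pause:3.2 ID that appears later in this log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// needsTransfer reports whether the runtime's copy of image differs
// from the ID the cache expects (or is absent entirely).
func needsTransfer(image, expectedID string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // not present in the container runtime at all
	}
	return strings.TrimSpace(string(out)) != expectedID
}

func main() {
	fmt.Println(needsTransfer("registry.k8s.io/pause:3.2",
		"2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c"))
}
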
	I0116 04:17:51.466712 2448659 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	W0116 04:17:51.479693 2448659 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0116 04:17:51.480008 2448659 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	W0116 04:17:51.485700 2448659 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0116 04:17:51.485992 2448659 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	W0116 04:17:51.489032 2448659 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0116 04:17:51.489301 2448659 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	W0116 04:17:51.498780 2448659 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0116 04:17:51.499222 2448659 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	W0116 04:17:51.541810 2448659 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0116 04:17:51.542072 2448659 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	W0116 04:17:51.544894 2448659 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0116 04:17:51.545158 2448659 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0116 04:17:51.582013 2448659 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0116 04:17:51.582130 2448659 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0116 04:17:51.582206 2448659 ssh_runner.go:195] Run: which crictl
	W0116 04:17:51.654063 2448659 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0116 04:17:51.654309 2448659 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 04:17:51.685010 2448659 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0116 04:17:51.685118 2448659 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0116 04:17:51.685211 2448659 ssh_runner.go:195] Run: which crictl
	I0116 04:17:51.685362 2448659 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0116 04:17:51.685429 2448659 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0116 04:17:51.685516 2448659 ssh_runner.go:195] Run: which crictl
	I0116 04:17:51.685625 2448659 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0116 04:17:51.685676 2448659 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0116 04:17:51.685719 2448659 ssh_runner.go:195] Run: which crictl
	I0116 04:17:51.685853 2448659 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0116 04:17:51.685898 2448659 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0116 04:17:51.686008 2448659 ssh_runner.go:195] Run: which crictl
	I0116 04:17:51.745832 2448659 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0116 04:17:51.745951 2448659 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0116 04:17:51.746068 2448659 ssh_runner.go:195] Run: which crictl
	I0116 04:17:51.751073 2448659 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0116 04:17:51.751261 2448659 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0116 04:17:51.751325 2448659 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0116 04:17:51.751406 2448659 ssh_runner.go:195] Run: which crictl
	I0116 04:17:51.855146 2448659 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0116 04:17:51.855208 2448659 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 04:17:51.855267 2448659 ssh_runner.go:195] Run: which crictl
	I0116 04:17:51.855397 2448659 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0116 04:17:51.855475 2448659 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0116 04:17:51.855530 2448659 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0116 04:17:51.855583 2448659 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0116 04:17:51.855657 2448659 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0116 04:17:51.855739 2448659 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0116 04:17:51.855804 2448659 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-2415678/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I0116 04:17:51.893559 2448659 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 04:17:52.020006 2448659 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-2415678/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I0116 04:17:52.020169 2448659 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-2415678/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I0116 04:17:52.061710 2448659 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-2415678/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0116 04:17:52.085933 2448659 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-2415678/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I0116 04:17:52.086004 2448659 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-2415678/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I0116 04:17:52.086081 2448659 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-2415678/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I0116 04:17:52.086110 2448659 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17965-2415678/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0116 04:17:52.086154 2448659 cache_images.go:92] LoadImages completed in 967.087775ms
	W0116 04:17:52.086217 2448659 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17965-2415678/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2: no such file or directory
	I0116 04:17:52.086288 2448659 ssh_runner.go:195] Run: crio config
	I0116 04:17:52.170448 2448659 cni.go:84] Creating CNI manager for ""
	I0116 04:17:52.170471 2448659 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0116 04:17:52.170521 2448659 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 04:17:52.170546 2448659 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-865845 NodeName:ingress-addon-legacy-865845 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0116 04:17:52.170743 2448659 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-865845"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 04:17:52.170825 2448659 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-865845 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-865845 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
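
[Editor's note] The kubeadm config and kubelet unit above are rendered from the options struct and written out verbatim (see the scp lines that follow). A hedged sketch, assuming nothing about minikube's real template, of producing the head of such a config with text/template:

package main

import (
	"os"
	"text/template"
)

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	// values mirror the log; the template itself is illustrative
	data := struct {
		NodeIP, CRISocket, NodeName string
		Port                        int
	}{"192.168.49.2", "/var/run/crio/crio.sock", "ingress-addon-legacy-865845", 8443}
	template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, data)
}
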
	I0116 04:17:52.170896 2448659 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0116 04:17:52.182344 2448659 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 04:17:52.182431 2448659 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 04:17:52.193160 2448659 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I0116 04:17:52.215425 2448659 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0116 04:17:52.238355 2448659 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0116 04:17:52.259990 2448659 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0116 04:17:52.264562 2448659 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 04:17:52.278440 2448659 certs.go:56] Setting up /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845 for IP: 192.168.49.2
	I0116 04:17:52.278517 2448659 certs.go:190] acquiring lock for shared ca certs: {Name:mkfc28b038850f5c4d343916ed6224daf2d0b70f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 04:17:52.278697 2448659 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17965-2415678/.minikube/ca.key
	I0116 04:17:52.278777 2448659 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17965-2415678/.minikube/proxy-client-ca.key
	I0116 04:17:52.278858 2448659 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/client.key
	I0116 04:17:52.278900 2448659 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/client.crt with IP's: []
	I0116 04:17:52.830848 2448659 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/client.crt ...
	I0116 04:17:52.830881 2448659 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/client.crt: {Name:mkf9c235e588bc0f25ea37494e016c347fb1eb6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 04:17:52.831083 2448659 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/client.key ...
	I0116 04:17:52.831099 2448659 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/client.key: {Name:mkb214a2ca4cfd7b5bf49e4204ec69cf0e0cd77d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 04:17:52.831186 2448659 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/apiserver.key.dd3b5fb2
	I0116 04:17:52.831200 2448659 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0116 04:17:53.401283 2448659 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/apiserver.crt.dd3b5fb2 ...
	I0116 04:17:53.401317 2448659 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/apiserver.crt.dd3b5fb2: {Name:mkdfdd0834a67d5efe1cc31321b21c33d8d3cb03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 04:17:53.401502 2448659 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/apiserver.key.dd3b5fb2 ...
	I0116 04:17:53.401515 2448659 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/apiserver.key.dd3b5fb2: {Name:mk7b444f5b625003fb905aa370d466b436b7ad6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 04:17:53.401603 2448659 certs.go:337] copying /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/apiserver.crt
	I0116 04:17:53.401685 2448659 certs.go:341] copying /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/apiserver.key
	I0116 04:17:53.401755 2448659 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/proxy-client.key
	I0116 04:17:53.401775 2448659 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/proxy-client.crt with IP's: []
	I0116 04:17:54.225911 2448659 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/proxy-client.crt ...
	I0116 04:17:54.225946 2448659 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/proxy-client.crt: {Name:mk670cf4c92ace62d9d17269fb3245bc97155063 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 04:17:54.226137 2448659 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/proxy-client.key ...
	I0116 04:17:54.226151 2448659 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/proxy-client.key: {Name:mk92925828b45f50fea24fecf174c08962a3bd79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
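
[Editor's note] The crypto.go/lock.go lines above generate the client, apiserver, and proxy-client cert/key pairs. A self-contained sketch of one such generation with crypto/x509; it self-signs for brevity (minikube signs with its minikubeCA), while the SANs and the ~26280h expiry mirror values visible elsewhere in this log:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour * 365 * 3), // ~26280h, as in CertExpiration
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// the apiserver cert IPs from the log above
		IPAddresses: []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
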
	I0116 04:17:54.226225 2448659 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0116 04:17:54.226249 2448659 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0116 04:17:54.226262 2448659 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0116 04:17:54.226280 2448659 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0116 04:17:54.226295 2448659 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-2415678/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0116 04:17:54.226311 2448659 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-2415678/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0116 04:17:54.226328 2448659 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-2415678/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0116 04:17:54.226349 2448659 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-2415678/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0116 04:17:54.226408 2448659 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/home/jenkins/minikube-integration/17965-2415678/.minikube/certs/2421005.pem (1338 bytes)
	W0116 04:17:54.226454 2448659 certs.go:433] ignoring /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/home/jenkins/minikube-integration/17965-2415678/.minikube/certs/2421005_empty.pem, impossibly tiny 0 bytes
	I0116 04:17:54.226465 2448659 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/home/jenkins/minikube-integration/17965-2415678/.minikube/certs/ca-key.pem (1675 bytes)
	I0116 04:17:54.226492 2448659 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/home/jenkins/minikube-integration/17965-2415678/.minikube/certs/ca.pem (1078 bytes)
	I0116 04:17:54.226525 2448659 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/home/jenkins/minikube-integration/17965-2415678/.minikube/certs/cert.pem (1123 bytes)
	I0116 04:17:54.226555 2448659 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/home/jenkins/minikube-integration/17965-2415678/.minikube/certs/key.pem (1679 bytes)
	I0116 04:17:54.226603 2448659 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-2415678/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17965-2415678/.minikube/files/etc/ssl/certs/24210052.pem (1708 bytes)
	I0116 04:17:54.226637 2448659 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-2415678/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0116 04:17:54.226658 2448659 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/2421005.pem -> /usr/share/ca-certificates/2421005.pem
	I0116 04:17:54.226675 2448659 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-2415678/.minikube/files/etc/ssl/certs/24210052.pem -> /usr/share/ca-certificates/24210052.pem
	I0116 04:17:54.227270 2448659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 04:17:54.258334 2448659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0116 04:17:54.288312 2448659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 04:17:54.317875 2448659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0116 04:17:54.348521 2448659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-2415678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 04:17:54.378263 2448659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-2415678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 04:17:54.407421 2448659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-2415678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 04:17:54.436928 2448659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-2415678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0116 04:17:54.466836 2448659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-2415678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 04:17:54.496481 2448659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/2421005.pem --> /usr/share/ca-certificates/2421005.pem (1338 bytes)
	I0116 04:17:54.526252 2448659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-2415678/.minikube/files/etc/ssl/certs/24210052.pem --> /usr/share/ca-certificates/24210052.pem (1708 bytes)
	I0116 04:17:54.556525 2448659 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 04:17:54.578519 2448659 ssh_runner.go:195] Run: openssl version
	I0116 04:17:54.585587 2448659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 04:17:54.598012 2448659 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 04:17:54.602995 2448659 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 04:06 /usr/share/ca-certificates/minikubeCA.pem
	I0116 04:17:54.603062 2448659 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 04:17:54.611987 2448659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 04:17:54.624786 2448659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2421005.pem && ln -fs /usr/share/ca-certificates/2421005.pem /etc/ssl/certs/2421005.pem"
	I0116 04:17:54.636642 2448659 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2421005.pem
	I0116 04:17:54.641394 2448659 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 04:13 /usr/share/ca-certificates/2421005.pem
	I0116 04:17:54.641463 2448659 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2421005.pem
	I0116 04:17:54.650405 2448659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2421005.pem /etc/ssl/certs/51391683.0"
	I0116 04:17:54.662292 2448659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/24210052.pem && ln -fs /usr/share/ca-certificates/24210052.pem /etc/ssl/certs/24210052.pem"
	I0116 04:17:54.674333 2448659 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/24210052.pem
	I0116 04:17:54.679097 2448659 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 04:13 /usr/share/ca-certificates/24210052.pem
	I0116 04:17:54.679212 2448659 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/24210052.pem
	I0116 04:17:54.687869 2448659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/24210052.pem /etc/ssl/certs/3ec20f2e.0"
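
[Editor's note] The three blocks above follow the same pattern per CA: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink /etc/ssl/certs/<hash>.0 to it so scanners find the CA. A sketch of that step which shells out to openssl exactly as the log does, rather than reimplementing the hash:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const pemPath = "/usr/share/ca-certificates/minikubeCA.pem"
	// openssl x509 -hash -noout -in <pem>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941, as in the log
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // ln -fs semantics: replace any existing link
	if err := os.Symlink(pemPath, link); err != nil {
		panic(err)
	}
}
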
	I0116 04:17:54.699987 2448659 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 04:17:54.704487 2448659 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 04:17:54.704540 2448659 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-865845 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-865845 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 04:17:54.704617 2448659 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 04:17:54.704676 2448659 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 04:17:54.746902 2448659 cri.go:89] found id: ""
	I0116 04:17:54.747020 2448659 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 04:17:54.757750 2448659 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 04:17:54.768570 2448659 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0116 04:17:54.768643 2448659 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 04:17:54.779698 2448659 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 04:17:54.779745 2448659 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0116 04:17:54.838035 2448659 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0116 04:17:54.838316 2448659 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 04:17:54.890445 2448659 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0116 04:17:54.890519 2448659 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-aws
	I0116 04:17:54.890558 2448659 kubeadm.go:322] OS: Linux
	I0116 04:17:54.890606 2448659 kubeadm.go:322] CGROUPS_CPU: enabled
	I0116 04:17:54.890656 2448659 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0116 04:17:54.890705 2448659 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0116 04:17:54.890762 2448659 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0116 04:17:54.890809 2448659 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0116 04:17:54.890865 2448659 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0116 04:17:54.992404 2448659 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 04:17:54.992516 2448659 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 04:17:54.992627 2448659 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0116 04:17:55.249735 2448659 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 04:17:55.251237 2448659 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 04:17:55.251438 2448659 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0116 04:17:55.357085 2448659 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 04:17:55.359889 2448659 out.go:204]   - Generating certificates and keys ...
	I0116 04:17:55.360068 2448659 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 04:17:55.360171 2448659 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 04:17:55.761358 2448659 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0116 04:17:56.291954 2448659 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0116 04:17:56.504298 2448659 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0116 04:17:56.742949 2448659 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0116 04:17:56.965820 2448659 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0116 04:17:56.966174 2448659 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-865845 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0116 04:17:57.613413 2448659 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0116 04:17:57.613803 2448659 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-865845 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0116 04:17:58.328241 2448659 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0116 04:17:59.178780 2448659 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0116 04:18:00.682352 2448659 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0116 04:18:00.682746 2448659 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 04:18:01.349903 2448659 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 04:18:01.599166 2448659 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 04:18:02.657993 2448659 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 04:18:02.925160 2448659 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 04:18:02.925247 2448659 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 04:18:02.927840 2448659 out.go:204]   - Booting up control plane ...
	I0116 04:18:02.927971 2448659 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 04:18:02.934454 2448659 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 04:18:02.936591 2448659 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 04:18:02.940312 2448659 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 04:18:02.940511 2448659 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 04:18:16.442097 2448659 kubeadm.go:322] [apiclient] All control plane components are healthy after 13.502258 seconds
	I0116 04:18:16.442218 2448659 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 04:18:16.463647 2448659 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 04:18:16.979434 2448659 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0116 04:18:16.979584 2448659 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-865845 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0116 04:18:17.494291 2448659 kubeadm.go:322] [bootstrap-token] Using token: n8lxgp.izgsleqs9fffzzea
	I0116 04:18:17.496406 2448659 out.go:204]   - Configuring RBAC rules ...
	I0116 04:18:17.496531 2448659 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 04:18:17.504741 2448659 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0116 04:18:17.517173 2448659 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 04:18:17.520429 2448659 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 04:18:17.523352 2448659 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 04:18:17.526714 2448659 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 04:18:17.537271 2448659 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0116 04:18:17.773399 2448659 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0116 04:18:17.907900 2448659 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0116 04:18:17.909101 2448659 kubeadm.go:322] 
	I0116 04:18:17.909184 2448659 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0116 04:18:17.909201 2448659 kubeadm.go:322] 
	I0116 04:18:17.909287 2448659 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0116 04:18:17.909293 2448659 kubeadm.go:322] 
	I0116 04:18:17.909317 2448659 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0116 04:18:17.909382 2448659 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 04:18:17.909431 2448659 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 04:18:17.909450 2448659 kubeadm.go:322] 
	I0116 04:18:17.909511 2448659 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0116 04:18:17.909596 2448659 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 04:18:17.909665 2448659 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 04:18:17.909670 2448659 kubeadm.go:322] 
	I0116 04:18:17.909760 2448659 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0116 04:18:17.909839 2448659 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0116 04:18:17.909848 2448659 kubeadm.go:322] 
	I0116 04:18:17.909927 2448659 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token n8lxgp.izgsleqs9fffzzea \
	I0116 04:18:17.910027 2448659 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:c8e67ac96916dfae1995365a18c7132d078acd6bda510edb19f010658e1bfbad \
	I0116 04:18:17.910053 2448659 kubeadm.go:322]     --control-plane 
	I0116 04:18:17.910058 2448659 kubeadm.go:322] 
	I0116 04:18:17.913759 2448659 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0116 04:18:17.913800 2448659 kubeadm.go:322] 
	I0116 04:18:17.913921 2448659 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token n8lxgp.izgsleqs9fffzzea \
	I0116 04:18:17.914081 2448659 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:c8e67ac96916dfae1995365a18c7132d078acd6bda510edb19f010658e1bfbad 
	I0116 04:18:17.914316 2448659 kubeadm.go:322] W0116 04:17:54.837204    1223 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0116 04:18:17.914661 2448659 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I0116 04:18:17.914799 2448659 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 04:18:17.914952 2448659 kubeadm.go:322] W0116 04:18:02.934098    1223 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0116 04:18:17.915121 2448659 kubeadm.go:322] W0116 04:18:02.935716    1223 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0116 04:18:17.915141 2448659 cni.go:84] Creating CNI manager for ""
	I0116 04:18:17.915151 2448659 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0116 04:18:17.917166 2448659 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0116 04:18:17.919176 2448659 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0116 04:18:17.924561 2448659 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I0116 04:18:17.924583 2448659 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0116 04:18:17.947721 2448659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0116 04:18:18.412628 2448659 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 04:18:18.412795 2448659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:18:18.412868 2448659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=6e8fa5f64d0e7272be43ff25ed3826261f0a2578 minikube.k8s.io/name=ingress-addon-legacy-865845 minikube.k8s.io/updated_at=2024_01_16T04_18_18_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:18:18.431069 2448659 ops.go:34] apiserver oom_adj: -16
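
[Editor's note] ops.go:34 above reports the apiserver's oom_adj (-16, i.e. the kernel should avoid killing it under memory pressure), read via the cat /proc/$(pgrep kube-apiserver)/oom_adj run a few lines earlier. The same check as a small sketch:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		panic(err) // no apiserver process found
	}
	pid := strings.Fields(string(out))[0]
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		panic(err)
	}
	fmt.Printf("apiserver oom_adj: %s", adj) // the run above logged -16
}
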
	I0116 04:18:18.570144 2448659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:18:19.071050 2448659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:18:19.571270 2448659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:18:20.070994 2448659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:18:20.571184 2448659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:18:21.070965 2448659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:18:21.570483 2448659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:18:22.071029 2448659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:18:22.570518 2448659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:18:23.070845 2448659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:18:23.570308 2448659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:18:24.071216 2448659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:18:24.571251 2448659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:18:25.070273 2448659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:18:25.570761 2448659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:18:26.071227 2448659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:18:26.570613 2448659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:18:27.070312 2448659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:18:27.570702 2448659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:18:28.070576 2448659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:18:28.570501 2448659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:18:29.070445 2448659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:18:29.570514 2448659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:18:30.070288 2448659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:18:30.570270 2448659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:18:31.070930 2448659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:18:31.570961 2448659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:18:32.070601 2448659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:18:32.570695 2448659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:18:33.070333 2448659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:18:33.204651 2448659 kubeadm.go:1088] duration metric: took 14.791933566s to wait for elevateKubeSystemPrivileges.
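The half-minute of repeated `kubectl get sa default` calls above is a simple existence poll: the elevateKubeSystemPrivileges step appears to wait until the `default` ServiceAccount has been created in the new cluster before proceeding. A shell loop equivalent to what the log shows (same command; the ~500ms cadence is read off the timestamps above):

    until sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # matches the ~500ms spacing of the poll timestamps above
    done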
	I0116 04:18:33.204687 2448659 kubeadm.go:406] StartCluster complete in 38.500152517s
	I0116 04:18:33.204706 2448659 settings.go:142] acquiring lock: {Name:mk66adae4842b25a93c5566bbfd72e0abd3ff5ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 04:18:33.204787 2448659 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17965-2415678/kubeconfig
	I0116 04:18:33.205461 2448659 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-2415678/kubeconfig: {Name:mk62b61676cf27f7a957a454bc1b05d015789bca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 04:18:33.206160 2448659 kapi.go:59] client config for ingress-addon-legacy-865845: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/client.crt", KeyFile:"/home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/client.key", CAFile:"/home/jenkins/minikube-integration/17965-2415678/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9c50), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
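The rest.Config dump above shows how the test harness authenticates: a client certificate/key pair under the profile directory plus the cluster CA. A minimal client-go sketch (an illustration, not minikube's actual code) that builds a typed clientset from those same fields:

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	// Host and cert paths are the ones shown in the kapi.go:59 dump above.
    	cfg := &rest.Config{
    		Host: "https://192.168.49.2:8443",
    		TLSClientConfig: rest.TLSClientConfig{
    			CertFile: "/home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/client.crt",
    			KeyFile:  "/home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/client.key",
    			CAFile:   "/home/jenkins/minikube-integration/17965-2415678/.minikube/ca.crt",
    		},
    	}
    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("client ready for %s: %T\n", cfg.Host, clientset)
    }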
	I0116 04:18:33.207267 2448659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 04:18:33.207508 2448659 config.go:182] Loaded profile config "ingress-addon-legacy-865845": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0116 04:18:33.207537 2448659 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 04:18:33.207590 2448659 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-865845"
	I0116 04:18:33.207603 2448659 addons.go:234] Setting addon storage-provisioner=true in "ingress-addon-legacy-865845"
	I0116 04:18:33.207656 2448659 host.go:66] Checking if "ingress-addon-legacy-865845" exists ...
	I0116 04:18:33.208101 2448659 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-865845 --format={{.State.Status}}
	I0116 04:18:33.208687 2448659 cert_rotation.go:137] Starting client certificate rotation controller
	I0116 04:18:33.208716 2448659 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-865845"
	I0116 04:18:33.208732 2448659 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-865845"
	I0116 04:18:33.209031 2448659 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-865845 --format={{.State.Status}}
	I0116 04:18:33.264419 2448659 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 04:18:33.269515 2448659 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 04:18:33.269537 2448659 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 04:18:33.269597 2448659 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-865845
	I0116 04:18:33.267078 2448659 kapi.go:59] client config for ingress-addon-legacy-865845: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/client.crt", KeyFile:"/home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/client.key", CAFile:"/home/jenkins/minikube-integration/17965-2415678/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9c50), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 04:18:33.270097 2448659 addons.go:234] Setting addon default-storageclass=true in "ingress-addon-legacy-865845"
	I0116 04:18:33.270127 2448659 host.go:66] Checking if "ingress-addon-legacy-865845" exists ...
	I0116 04:18:33.270600 2448659 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-865845 --format={{.State.Status}}
	I0116 04:18:33.307268 2448659 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 04:18:33.307296 2448659 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 04:18:33.307358 2448659 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-865845
	I0116 04:18:33.308898 2448659 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35331 SSHKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/machines/ingress-addon-legacy-865845/id_rsa Username:docker}
	I0116 04:18:33.347013 2448659 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35331 SSHKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/machines/ingress-addon-legacy-865845/id_rsa Username:docker}
	I0116 04:18:33.477874 2448659 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
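The sed pipeline above edits the CoreDNS Corefile in place: it inserts a hosts block ahead of the forward plugin so that host.minikube.internal resolves to the host gateway, and adds the log directive ahead of errors to turn on query logging. After the replace, the affected stanza reads approximately as follows (directives the command does not touch are elided):

            log
            errors
            ...
            hosts {
               192.168.49.1 host.minikube.internal
               fallthrough
            }
            forward . /etc/resolv.conf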
	I0116 04:18:33.508355 2448659 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 04:18:33.557433 2448659 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 04:18:33.840330 2448659 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-865845" context rescaled to 1 replica
	I0116 04:18:33.840415 2448659 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 04:18:33.843056 2448659 out.go:177] * Verifying Kubernetes components...
	I0116 04:18:33.845727 2448659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 04:18:33.969205 2448659 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0116 04:18:34.085294 2448659 kapi.go:59] client config for ingress-addon-legacy-865845: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/client.crt", KeyFile:"/home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/client.key", CAFile:"/home/jenkins/minikube-integration/17965-2415678/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9c50), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 04:18:34.085573 2448659 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-865845" to be "Ready" ...
	I0116 04:18:34.096014 2448659 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0116 04:18:34.097713 2448659 addons.go:505] enable addons completed in 890.164221ms: enabled=[storage-provisioner default-storageclass]
	I0116 04:18:36.088561 2448659 node_ready.go:58] node "ingress-addon-legacy-865845" has status "Ready":"False"
	I0116 04:18:38.089046 2448659 node_ready.go:58] node "ingress-addon-legacy-865845" has status "Ready":"False"
	I0116 04:18:40.089643 2448659 node_ready.go:58] node "ingress-addon-legacy-865845" has status "Ready":"False"
	I0116 04:18:41.589354 2448659 node_ready.go:49] node "ingress-addon-legacy-865845" has status "Ready":"True"
	I0116 04:18:41.589386 2448659 node_ready.go:38] duration metric: took 7.503793071s waiting for node "ingress-addon-legacy-865845" to be "Ready" ...
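The readiness wait above polls the node object until its Ready condition turns True. A minimal client-go sketch of that pattern (an illustration, not minikube's implementation):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls until the named node reports a Ready=True condition.
    func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
    	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    		if err != nil {
    			return false, nil // treat API errors as "not ready yet" and keep polling
    		}
    		for _, cond := range node.Status.Conditions {
    			if cond.Type == corev1.NodeReady {
    				return cond.Status == corev1.ConditionTrue, nil
    			}
    		}
    		return false, nil
    	})
    }

    func main() {
    	// Wire a real clientset (e.g. from the rest.Config sketch earlier) to use this.
    	_ = waitNodeReady
    	fmt.Println("waitNodeReady defined")
    }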
	I0116 04:18:41.589397 2448659 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 04:18:41.596741 2448659 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-shnww" in "kube-system" namespace to be "Ready" ...
	I0116 04:18:43.600092 2448659 pod_ready.go:102] pod "coredns-66bff467f8-shnww" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-01-16 04:18:32 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0116 04:18:45.600193 2448659 pod_ready.go:102] pod "coredns-66bff467f8-shnww" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-01-16 04:18:32 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0116 04:18:47.602554 2448659 pod_ready.go:102] pod "coredns-66bff467f8-shnww" in "kube-system" namespace has status "Ready":"False"
	I0116 04:18:49.609043 2448659 pod_ready.go:102] pod "coredns-66bff467f8-shnww" in "kube-system" namespace has status "Ready":"False"
	I0116 04:18:50.103297 2448659 pod_ready.go:92] pod "coredns-66bff467f8-shnww" in "kube-system" namespace has status "Ready":"True"
	I0116 04:18:50.103328 2448659 pod_ready.go:81] duration metric: took 8.50650386s waiting for pod "coredns-66bff467f8-shnww" in "kube-system" namespace to be "Ready" ...
	I0116 04:18:50.103341 2448659 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-865845" in "kube-system" namespace to be "Ready" ...
	I0116 04:18:50.108371 2448659 pod_ready.go:92] pod "etcd-ingress-addon-legacy-865845" in "kube-system" namespace has status "Ready":"True"
	I0116 04:18:50.108398 2448659 pod_ready.go:81] duration metric: took 5.050128ms waiting for pod "etcd-ingress-addon-legacy-865845" in "kube-system" namespace to be "Ready" ...
	I0116 04:18:50.108413 2448659 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-865845" in "kube-system" namespace to be "Ready" ...
	I0116 04:18:50.114745 2448659 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-865845" in "kube-system" namespace has status "Ready":"True"
	I0116 04:18:50.114773 2448659 pod_ready.go:81] duration metric: took 6.351864ms waiting for pod "kube-apiserver-ingress-addon-legacy-865845" in "kube-system" namespace to be "Ready" ...
	I0116 04:18:50.114786 2448659 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-865845" in "kube-system" namespace to be "Ready" ...
	I0116 04:18:50.120198 2448659 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-865845" in "kube-system" namespace has status "Ready":"True"
	I0116 04:18:50.120228 2448659 pod_ready.go:81] duration metric: took 5.422516ms waiting for pod "kube-controller-manager-ingress-addon-legacy-865845" in "kube-system" namespace to be "Ready" ...
	I0116 04:18:50.120245 2448659 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bfvrx" in "kube-system" namespace to be "Ready" ...
	I0116 04:18:50.125733 2448659 pod_ready.go:92] pod "kube-proxy-bfvrx" in "kube-system" namespace has status "Ready":"True"
	I0116 04:18:50.125765 2448659 pod_ready.go:81] duration metric: took 5.508652ms waiting for pod "kube-proxy-bfvrx" in "kube-system" namespace to be "Ready" ...
	I0116 04:18:50.125778 2448659 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-865845" in "kube-system" namespace to be "Ready" ...
	I0116 04:18:50.298202 2448659 request.go:629] Waited for 172.320336ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-865845
	I0116 04:18:50.498359 2448659 request.go:629] Waited for 197.413722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-865845
	I0116 04:18:50.501473 2448659 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-865845" in "kube-system" namespace has status "Ready":"True"
	I0116 04:18:50.501498 2448659 pod_ready.go:81] duration metric: took 375.710882ms waiting for pod "kube-scheduler-ingress-addon-legacy-865845" in "kube-system" namespace to be "Ready" ...
	I0116 04:18:50.501513 2448659 pod_ready.go:38] duration metric: took 8.912104781s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 04:18:50.501549 2448659 api_server.go:52] waiting for apiserver process to appear ...
	I0116 04:18:50.501634 2448659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 04:18:50.516157 2448659 api_server.go:72] duration metric: took 16.675662148s to wait for apiserver process to appear ...
	I0116 04:18:50.516183 2448659 api_server.go:88] waiting for apiserver healthz status ...
	I0116 04:18:50.516203 2448659 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0116 04:18:50.525510 2448659 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0116 04:18:50.526367 2448659 api_server.go:141] control plane version: v1.18.20
	I0116 04:18:50.526392 2448659 api_server.go:131] duration metric: took 10.201292ms to wait for apiserver health ...
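The healthz probe above is a plain HTTPS GET against the apiserver that expects a 200 response with body "ok" (the stray "ok" line in the log is that body). A self-contained sketch; certificate verification is skipped here only to keep the example standalone, whereas the real check trusts the cluster CA:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Illustration only: skip TLS verification; production code should
    		// load the cluster CA (see the CAFile in the rest.Config dumps above).
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get("https://192.168.49.2:8443/healthz")
    	if err != nil {
    		fmt.Println("healthz unreachable:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // expect 200: ok
    }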
	I0116 04:18:50.526402 2448659 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 04:18:50.697646 2448659 request.go:629] Waited for 171.179147ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0116 04:18:50.703677 2448659 system_pods.go:59] 8 kube-system pods found
	I0116 04:18:50.703711 2448659 system_pods.go:61] "coredns-66bff467f8-shnww" [189408de-c92f-4939-9cc0-88b2e342e8f2] Running
	I0116 04:18:50.703718 2448659 system_pods.go:61] "etcd-ingress-addon-legacy-865845" [f8d8cdd9-4815-4df0-8839-893fc890e9ed] Running
	I0116 04:18:50.703723 2448659 system_pods.go:61] "kindnet-q46r6" [ffe1975b-0720-4c74-ba53-1a05b84e0392] Running
	I0116 04:18:50.703728 2448659 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-865845" [d2d1f502-4bcb-4299-afb8-8a823d0a192d] Running
	I0116 04:18:50.703734 2448659 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-865845" [23659432-2def-4ab5-97ae-e94e8249faa6] Running
	I0116 04:18:50.703741 2448659 system_pods.go:61] "kube-proxy-bfvrx" [74ba27ec-223d-4295-a4a8-0127d5484d4f] Running
	I0116 04:18:50.703747 2448659 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-865845" [106beb95-831a-4f51-97d9-7f5c43d251fc] Running
	I0116 04:18:50.703759 2448659 system_pods.go:61] "storage-provisioner" [8ad914e0-3f24-426f-bae7-6b5876b168e0] Running
	I0116 04:18:50.703766 2448659 system_pods.go:74] duration metric: took 177.357566ms to wait for pod list to return data ...
	I0116 04:18:50.703779 2448659 default_sa.go:34] waiting for default service account to be created ...
	I0116 04:18:50.898159 2448659 request.go:629] Waited for 194.303269ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0116 04:18:50.900699 2448659 default_sa.go:45] found service account: "default"
	I0116 04:18:50.900784 2448659 default_sa.go:55] duration metric: took 196.99628ms for default service account to be created ...
	I0116 04:18:50.900801 2448659 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 04:18:51.098262 2448659 request.go:629] Waited for 197.371852ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0116 04:18:51.104955 2448659 system_pods.go:86] 8 kube-system pods found
	I0116 04:18:51.104989 2448659 system_pods.go:89] "coredns-66bff467f8-shnww" [189408de-c92f-4939-9cc0-88b2e342e8f2] Running
	I0116 04:18:51.104997 2448659 system_pods.go:89] "etcd-ingress-addon-legacy-865845" [f8d8cdd9-4815-4df0-8839-893fc890e9ed] Running
	I0116 04:18:51.105005 2448659 system_pods.go:89] "kindnet-q46r6" [ffe1975b-0720-4c74-ba53-1a05b84e0392] Running
	I0116 04:18:51.105010 2448659 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-865845" [d2d1f502-4bcb-4299-afb8-8a823d0a192d] Running
	I0116 04:18:51.105015 2448659 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-865845" [23659432-2def-4ab5-97ae-e94e8249faa6] Running
	I0116 04:18:51.105021 2448659 system_pods.go:89] "kube-proxy-bfvrx" [74ba27ec-223d-4295-a4a8-0127d5484d4f] Running
	I0116 04:18:51.105026 2448659 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-865845" [106beb95-831a-4f51-97d9-7f5c43d251fc] Running
	I0116 04:18:51.105031 2448659 system_pods.go:89] "storage-provisioner" [8ad914e0-3f24-426f-bae7-6b5876b168e0] Running
	I0116 04:18:51.105038 2448659 system_pods.go:126] duration metric: took 204.231698ms to wait for k8s-apps to be running ...
	I0116 04:18:51.105050 2448659 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 04:18:51.105121 2448659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 04:18:51.120921 2448659 system_svc.go:56] duration metric: took 15.856705ms WaitForService to wait for kubelet.
	I0116 04:18:51.120959 2448659 kubeadm.go:581] duration metric: took 17.280472479s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 04:18:51.120980 2448659 node_conditions.go:102] verifying NodePressure condition ...
	I0116 04:18:51.298400 2448659 request.go:629] Waited for 177.331072ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0116 04:18:51.301476 2448659 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0116 04:18:51.301511 2448659 node_conditions.go:123] node cpu capacity is 2
	I0116 04:18:51.301525 2448659 node_conditions.go:105] duration metric: took 180.540148ms to run NodePressure ...
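The repeated "Waited for ... due to client-side throttling" lines in this stretch come from client-go's own rate limiter, not from server-side priority and fairness (the message says as much). With QPS and Burst left at zero, as in the rest.Config dumps above, client-go falls back to its defaults of QPS=5 and Burst=10, so bursts of GETs get delayed. A sketch of raising the limits (values are illustrative, not minikube's):

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/rest"
    )

    func main() {
    	cfg := &rest.Config{Host: "https://192.168.49.2:8443"}
    	// Zero values mean "use defaults" (QPS=5, Burst=10); requests beyond that
    	// budget are delayed, producing the throttling log lines above.
    	cfg.QPS = 50
    	cfg.Burst = 100
    	fmt.Println(cfg.QPS, cfg.Burst)
    }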
	I0116 04:18:51.301556 2448659 start.go:228] waiting for startup goroutines ...
	I0116 04:18:51.301568 2448659 start.go:233] waiting for cluster config update ...
	I0116 04:18:51.301578 2448659 start.go:242] writing updated cluster config ...
	I0116 04:18:51.301874 2448659 ssh_runner.go:195] Run: rm -f paused
	I0116 04:18:51.375928 2448659 start.go:600] kubectl: 1.29.0, cluster: 1.18.20 (minor skew: 11)
	I0116 04:18:51.379204 2448659 out.go:177] 
	W0116 04:18:51.381962 2448659 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.18.20.
	I0116 04:18:51.384477 2448659 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0116 04:18:51.386492 2448659 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-865845" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 16 04:21:51 ingress-addon-legacy-865845 crio[892]: time="2024-01-16 04:21:51.432151308Z" level=info msg="Created container 28506180dfd7d8f1dd7e59738cce2a1c9b5dd36273f6a0cf62d5ff1995a72ad4: default/hello-world-app-5f5d8b66bb-pk96v/hello-world-app" id=5a3829ef-9595-4fdb-a5ed-1539d73c0b9d name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Jan 16 04:21:51 ingress-addon-legacy-865845 crio[892]: time="2024-01-16 04:21:51.433045555Z" level=info msg="Starting container: 28506180dfd7d8f1dd7e59738cce2a1c9b5dd36273f6a0cf62d5ff1995a72ad4" id=a082aa8c-1756-4842-938f-3dae1721e919 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Jan 16 04:21:51 ingress-addon-legacy-865845 conmon[3599]: conmon 28506180dfd7d8f1dd7e <ninfo>: container 3612 exited with status 1
	Jan 16 04:21:51 ingress-addon-legacy-865845 crio[892]: time="2024-01-16 04:21:51.480475079Z" level=info msg="Started container" PID=3612 containerID=28506180dfd7d8f1dd7e59738cce2a1c9b5dd36273f6a0cf62d5ff1995a72ad4 description=default/hello-world-app-5f5d8b66bb-pk96v/hello-world-app id=a082aa8c-1756-4842-938f-3dae1721e919 name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=a5c85e9f49fd8f91492301c1783cfd63ec1c9295ab0db8308df7f9f223299a35
	Jan 16 04:21:52 ingress-addon-legacy-865845 crio[892]: time="2024-01-16 04:21:52.070117516Z" level=info msg="Removing container: a80e406327ddd73b9878f34b97bdb6ed4d4dadba95ca3c12f2ee81f06a97179e" id=040ce963-6dad-4006-ac19-28793992656b name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Jan 16 04:21:52 ingress-addon-legacy-865845 crio[892]: time="2024-01-16 04:21:52.097034946Z" level=info msg="Removed container a80e406327ddd73b9878f34b97bdb6ed4d4dadba95ca3c12f2ee81f06a97179e: default/hello-world-app-5f5d8b66bb-pk96v/hello-world-app" id=040ce963-6dad-4006-ac19-28793992656b name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Jan 16 04:21:52 ingress-addon-legacy-865845 crio[892]: time="2024-01-16 04:21:52.312941777Z" level=info msg="Stopping container: c1fb4f759196f93acd89af600c85d5d5b4f5f9d9747c61f2632d2a9a424f4e73 (timeout: 2s)" id=b05a134f-c297-436e-9181-b4f12fdca912 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jan 16 04:21:52 ingress-addon-legacy-865845 crio[892]: time="2024-01-16 04:21:52.332955612Z" level=info msg="Stopping container: c1fb4f759196f93acd89af600c85d5d5b4f5f9d9747c61f2632d2a9a424f4e73 (timeout: 2s)" id=ef79458f-2c19-4cca-bbf4-791ac680ada2 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jan 16 04:21:53 ingress-addon-legacy-865845 crio[892]: time="2024-01-16 04:21:53.312734116Z" level=info msg="Stopping pod sandbox: 885aa001d780ece206f9a684d2e1bb9504b8ca80e8cb44ce5c95b2ad757d21c7" id=19cc007d-5a3c-46f4-aaf5-6e9a9254e8b5 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 16 04:21:53 ingress-addon-legacy-865845 crio[892]: time="2024-01-16 04:21:53.312795358Z" level=info msg="Stopped pod sandbox (already stopped): 885aa001d780ece206f9a684d2e1bb9504b8ca80e8cb44ce5c95b2ad757d21c7" id=19cc007d-5a3c-46f4-aaf5-6e9a9254e8b5 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 16 04:21:54 ingress-addon-legacy-865845 crio[892]: time="2024-01-16 04:21:54.324854216Z" level=warning msg="Stopping container c1fb4f759196f93acd89af600c85d5d5b4f5f9d9747c61f2632d2a9a424f4e73 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=b05a134f-c297-436e-9181-b4f12fdca912 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jan 16 04:21:54 ingress-addon-legacy-865845 conmon[2710]: conmon c1fb4f759196f93acd89 <ninfo>: container 2721 exited with status 137
	Jan 16 04:21:54 ingress-addon-legacy-865845 crio[892]: time="2024-01-16 04:21:54.489153553Z" level=info msg="Stopped container c1fb4f759196f93acd89af600c85d5d5b4f5f9d9747c61f2632d2a9a424f4e73: ingress-nginx/ingress-nginx-controller-7fcf777cb7-ft2kb/controller" id=ef79458f-2c19-4cca-bbf4-791ac680ada2 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jan 16 04:21:54 ingress-addon-legacy-865845 crio[892]: time="2024-01-16 04:21:54.491367019Z" level=info msg="Stopped container c1fb4f759196f93acd89af600c85d5d5b4f5f9d9747c61f2632d2a9a424f4e73: ingress-nginx/ingress-nginx-controller-7fcf777cb7-ft2kb/controller" id=b05a134f-c297-436e-9181-b4f12fdca912 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jan 16 04:21:54 ingress-addon-legacy-865845 crio[892]: time="2024-01-16 04:21:54.491908823Z" level=info msg="Stopping pod sandbox: 9bb1f5e34887b3dadb43d06206eee5c20518dcc332bcc4c83aade46b75d70cb6" id=e374dcdf-036f-459a-af3a-cffc070b5f1b name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 16 04:21:54 ingress-addon-legacy-865845 crio[892]: time="2024-01-16 04:21:54.495358040Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-SQYFMDLQDGWKLUN4 - [0:0]\n:KUBE-HP-CIW4RBKUVSGHTBE5 - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n-X KUBE-HP-CIW4RBKUVSGHTBE5\n-X KUBE-HP-SQYFMDLQDGWKLUN4\nCOMMIT\n"
	Jan 16 04:21:54 ingress-addon-legacy-865845 crio[892]: time="2024-01-16 04:21:54.501281155Z" level=info msg="Stopping pod sandbox: 9bb1f5e34887b3dadb43d06206eee5c20518dcc332bcc4c83aade46b75d70cb6" id=d9f8407e-48fa-4a17-a1c6-9160c2163ce5 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 16 04:21:54 ingress-addon-legacy-865845 crio[892]: time="2024-01-16 04:21:54.501973897Z" level=info msg="Closing host port tcp:80"
	Jan 16 04:21:54 ingress-addon-legacy-865845 crio[892]: time="2024-01-16 04:21:54.502029321Z" level=info msg="Closing host port tcp:443"
	Jan 16 04:21:54 ingress-addon-legacy-865845 crio[892]: time="2024-01-16 04:21:54.503314064Z" level=info msg="Host port tcp:80 does not have an open socket"
	Jan 16 04:21:54 ingress-addon-legacy-865845 crio[892]: time="2024-01-16 04:21:54.503344955Z" level=info msg="Host port tcp:443 does not have an open socket"
	Jan 16 04:21:54 ingress-addon-legacy-865845 crio[892]: time="2024-01-16 04:21:54.503498239Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-ft2kb Namespace:ingress-nginx ID:9bb1f5e34887b3dadb43d06206eee5c20518dcc332bcc4c83aade46b75d70cb6 UID:bb6e4811-2b16-44b8-838c-474f693dbe6f NetNS:/var/run/netns/d1a97205-3f8a-49a7-86ca-34574e71b7c4 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 16 04:21:54 ingress-addon-legacy-865845 crio[892]: time="2024-01-16 04:21:54.503641079Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-ft2kb from CNI network \"kindnet\" (type=ptp)"
	Jan 16 04:21:54 ingress-addon-legacy-865845 crio[892]: time="2024-01-16 04:21:54.524197357Z" level=info msg="Stopped pod sandbox: 9bb1f5e34887b3dadb43d06206eee5c20518dcc332bcc4c83aade46b75d70cb6" id=e374dcdf-036f-459a-af3a-cffc070b5f1b name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jan 16 04:21:54 ingress-addon-legacy-865845 crio[892]: time="2024-01-16 04:21:54.524309461Z" level=info msg="Stopped pod sandbox (already stopped): 9bb1f5e34887b3dadb43d06206eee5c20518dcc332bcc4c83aade46b75d70cb6" id=d9f8407e-48fa-4a17-a1c6-9160c2163ce5 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
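The conmon message above ("exited with status 137") is the SIGKILL convention: the controller outlived the 2-second stop timeout, so CRI-O escalated from SIGTERM to SIGKILL, and a death by signal N is reported as exit status 128 + N (128 + 9 = 137). A quick shell demonstration of the convention:

    $ bash -c 'kill -9 $$'; echo $?
    137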
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	28506180dfd7d       dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79                                                   8 seconds ago       Exited              hello-world-app           2                   a5c85e9f49fd8       hello-world-app-5f5d8b66bb-pk96v
	34e4cf14d0d7c       docker.io/library/nginx@sha256:7913e8fa2e6a5f0160a5e6b7ea48b7d4a301c6058d63c3d632a35a59093cb4eb                    2 minutes ago       Running             nginx                     0                   375a7ba1628fe       nginx
	c1fb4f759196f       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   2 minutes ago       Exited              controller                0                   9bb1f5e34887b       ingress-nginx-controller-7fcf777cb7-ft2kb
	bf4755022db49       docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7     3 minutes ago       Exited              patch                     0                   1ee72d445cee0       ingress-nginx-admission-patch-gntfg
	9fb5790d5210f       docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7     3 minutes ago       Exited              create                    0                   5eb5057da8f0b       ingress-nginx-admission-create-qhlmv
	eae44963f85d8       gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2    3 minutes ago       Running             storage-provisioner       0                   1d89b88f1dcaf       storage-provisioner
	9c36d25bd62e2       6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184                                                   3 minutes ago       Running             coredns                   0                   3fe4f5c051e11       coredns-66bff467f8-shnww
	d76b6e5f67093       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                 3 minutes ago       Running             kindnet-cni               0                   aad39e8cac60d       kindnet-q46r6
	c8d22a9622b53       565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8                                                   3 minutes ago       Running             kube-proxy                0                   3914b3b4ad707       kube-proxy-bfvrx
	58d49aff392c7       2694cf044d66591c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0                                                   3 minutes ago       Running             kube-apiserver            0                   0020c3cac9f33       kube-apiserver-ingress-addon-legacy-865845
	cfe60cea30672       ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952                                                   3 minutes ago       Running             etcd                      0                   f4d2985b11721       etcd-ingress-addon-legacy-865845
	02e8129c6e834       095f37015706de6eedb4f57eb2f9a25a1e3bf4bec63d50ba73f8968ef4094fd1                                                   3 minutes ago       Running             kube-scheduler            0                   4bb39475265ae       kube-scheduler-ingress-addon-legacy-865845
	85101062d8b17       68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1                                                   3 minutes ago       Running             kube-controller-manager   0                   fd8f2379ea4e5       kube-controller-manager-ingress-addon-legacy-865845
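A container table in this shape comes from querying the CRI endpoint on the node; with CRI-O as the runtime here, the equivalent manual query would be the following (assuming crictl is present on the node, as it is on minikube images):

    $ sudo crictl ps -a    # -a includes Exited containers like the ones above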
	
	
	==> coredns [9c36d25bd62e24d90a838e064dc508598801b8b04530d83b9bd203e081f7b79a] <==
	[INFO] 10.244.0.5:41583 - 13412 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000159339s
	[INFO] 10.244.0.5:48225 - 19797 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002971436s
	[INFO] 10.244.0.5:41583 - 9747 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001812048s
	[INFO] 10.244.0.5:48225 - 17321 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001840462s
	[INFO] 10.244.0.5:41583 - 10786 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00195666s
	[INFO] 10.244.0.5:41583 - 64040 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000144358s
	[INFO] 10.244.0.5:48225 - 201 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000098155s
	[INFO] 10.244.0.5:35041 - 49989 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000082484s
	[INFO] 10.244.0.5:46601 - 30544 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000043281s
	[INFO] 10.244.0.5:35041 - 53239 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000039761s
	[INFO] 10.244.0.5:35041 - 31954 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000034198s
	[INFO] 10.244.0.5:35041 - 8782 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000034985s
	[INFO] 10.244.0.5:35041 - 3546 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000034075s
	[INFO] 10.244.0.5:35041 - 43389 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000035265s
	[INFO] 10.244.0.5:46601 - 42816 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000038301s
	[INFO] 10.244.0.5:46601 - 64044 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000036996s
	[INFO] 10.244.0.5:46601 - 14815 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000050592s
	[INFO] 10.244.0.5:46601 - 37715 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000034535s
	[INFO] 10.244.0.5:35041 - 15819 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001144315s
	[INFO] 10.244.0.5:46601 - 25482 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000056622s
	[INFO] 10.244.0.5:35041 - 56432 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001257576s
	[INFO] 10.244.0.5:46601 - 60353 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000956005s
	[INFO] 10.244.0.5:35041 - 44495 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000039276s
	[INFO] 10.244.0.5:46601 - 21268 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000877516s
	[INFO] 10.244.0.5:46601 - 50445 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000038202s
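The query pattern above is standard ndots:5 search-path expansion: each lookup of hello-world-app.default.svc.cluster.local is first tried with the suffixes ingress-nginx.svc.cluster.local, svc.cluster.local, cluster.local, and us-east-2.compute.internal (all NXDOMAIN) before the un-suffixed name returns NOERROR. A pod resolv.conf consistent with these queries would look roughly like this (the search list is read directly off the queries; the nameserver address is an assumption, since it is not shown in the log):

    nameserver 10.96.0.10
    search ingress-nginx.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
    options ndots:5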
	
	
	==> describe nodes <==
	Name:               ingress-addon-legacy-865845
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-865845
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e8fa5f64d0e7272be43ff25ed3826261f0a2578
	                    minikube.k8s.io/name=ingress-addon-legacy-865845
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_16T04_18_18_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 04:18:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-865845
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Jan 2024 04:21:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 04:21:51 +0000   Tue, 16 Jan 2024 04:18:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 04:21:51 +0000   Tue, 16 Jan 2024 04:18:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 04:21:51 +0000   Tue, 16 Jan 2024 04:18:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 04:21:51 +0000   Tue, 16 Jan 2024 04:18:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-865845
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 0773acf1a31f4c0e8bc73109eab9aa59
	  System UUID:                89072d5c-b430-42bc-9908-28cb1f18b8f6
	  Boot ID:                    3a165b82-f13d-4880-a2c5-3d4f8ff28eca
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-pk96v                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  kube-system                 coredns-66bff467f8-shnww                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     3m28s
	  kube-system                 etcd-ingress-addon-legacy-865845                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 kindnet-q46r6                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3m28s
	  kube-system                 kube-apiserver-ingress-addon-legacy-865845             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-865845    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 kube-proxy-bfvrx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 kube-scheduler-ingress-addon-legacy-865845             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             120Mi (1%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  3m54s (x4 over 3m55s)  kubelet     Node ingress-addon-legacy-865845 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m54s (x5 over 3m55s)  kubelet     Node ingress-addon-legacy-865845 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m54s (x4 over 3m55s)  kubelet     Node ingress-addon-legacy-865845 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m39s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m39s                  kubelet     Node ingress-addon-legacy-865845 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m39s                  kubelet     Node ingress-addon-legacy-865845 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m39s                  kubelet     Node ingress-addon-legacy-865845 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m26s                  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m19s                  kubelet     Node ingress-addon-legacy-865845 status is now: NodeReady
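The block above matches kubectl's standard node description. To regenerate it against this cluster (assuming the kubeconfig context carries the profile name, as minikube sets up by default):

    $ kubectl --context ingress-addon-legacy-865845 describe node ingress-addon-legacy-865845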
	
	
	==> dmesg <==
	[  +0.001333] FS-Cache: O-key=[8] 'eb693b0000000000'
	[  +0.000818] FS-Cache: N-cookie c=0000009c [p=00000093 fl=2 nc=0 na=1]
	[  +0.001133] FS-Cache: N-cookie d=00000000b2a3e576{9p.inode} n=0000000029c1254e
	[  +0.001369] FS-Cache: N-key=[8] 'eb693b0000000000'
	[  +0.005424] FS-Cache: Duplicate cookie detected
	[  +0.000784] FS-Cache: O-cookie c=00000096 [p=00000093 fl=226 nc=0 na=1]
	[  +0.001138] FS-Cache: O-cookie d=00000000b2a3e576{9p.inode} n=00000000c01c346d
	[  +0.001178] FS-Cache: O-key=[8] 'eb693b0000000000'
	[  +0.000799] FS-Cache: N-cookie c=0000009d [p=00000093 fl=2 nc=0 na=1]
	[  +0.001064] FS-Cache: N-cookie d=00000000b2a3e576{9p.inode} n=00000000fbbfa844
	[  +0.001199] FS-Cache: N-key=[8] 'eb693b0000000000'
	[  +2.236228] FS-Cache: Duplicate cookie detected
	[  +0.001025] FS-Cache: O-cookie c=00000094 [p=00000093 fl=226 nc=0 na=1]
	[  +0.001453] FS-Cache: O-cookie d=00000000b2a3e576{9p.inode} n=000000005b84793c
	[  +0.001281] FS-Cache: O-key=[8] 'ea693b0000000000'
	[  +0.000853] FS-Cache: N-cookie c=0000009f [p=00000093 fl=2 nc=0 na=1]
	[  +0.001156] FS-Cache: N-cookie d=00000000b2a3e576{9p.inode} n=000000001c0ef5b4
	[  +0.001206] FS-Cache: N-key=[8] 'ea693b0000000000'
	[  +0.506033] FS-Cache: Duplicate cookie detected
	[  +0.000873] FS-Cache: O-cookie c=00000099 [p=00000093 fl=226 nc=0 na=1]
	[  +0.001071] FS-Cache: O-cookie d=00000000b2a3e576{9p.inode} n=000000006abd7985
	[  +0.001299] FS-Cache: O-key=[8] 'f0693b0000000000'
	[  +0.000818] FS-Cache: N-cookie c=000000a0 [p=00000093 fl=2 nc=0 na=1]
	[  +0.001078] FS-Cache: N-cookie d=00000000b2a3e576{9p.inode} n=000000001298e7f4
	[  +0.001270] FS-Cache: N-key=[8] 'f0693b0000000000'
	
	
	==> etcd [cfe60cea306721e07706357ec660117065e3be92781a48a9fe1197db440d63e5] <==
	raft2024/01/16 04:18:10 INFO: aec36adc501070cc became follower at term 0
	raft2024/01/16 04:18:10 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2024/01/16 04:18:10 INFO: aec36adc501070cc became follower at term 1
	raft2024/01/16 04:18:10 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2024-01-16 04:18:10.101882 W | auth: simple token is not cryptographically signed
	2024-01-16 04:18:10.105222 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2024-01-16 04:18:10.109017 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2024/01/16 04:18:10 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2024-01-16 04:18:10.110226 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-01-16 04:18:10.110556 I | embed: listening for metrics on http://127.0.0.1:2381
	2024-01-16 04:18:10.110597 I | embed: listening for peers on 192.168.49.2:2380
	2024-01-16 04:18:10.110681 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	raft2024/01/16 04:18:10 INFO: aec36adc501070cc is starting a new election at term 1
	raft2024/01/16 04:18:10 INFO: aec36adc501070cc became candidate at term 2
	raft2024/01/16 04:18:10 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2024/01/16 04:18:10 INFO: aec36adc501070cc became leader at term 2
	raft2024/01/16 04:18:10 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2024-01-16 04:18:10.594280 I | etcdserver: setting up the initial cluster version to 3.4
	2024-01-16 04:18:10.595244 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-01-16 04:18:10.595348 I | etcdserver/api: enabled capabilities for version 3.4
	2024-01-16 04:18:10.595412 I | etcdserver: published {Name:ingress-addon-legacy-865845 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2024-01-16 04:18:10.596784 I | embed: ready to serve client requests
	2024-01-16 04:18:10.598265 I | embed: serving client requests on 192.168.49.2:2379
	2024-01-16 04:18:10.610638 I | embed: ready to serve client requests
	2024-01-16 04:18:10.612020 I | embed: serving client requests on 127.0.0.1:2379
	
	
	==> kernel <==
	 04:22:00 up 11:04,  0 users,  load average: 1.23, 1.33, 1.96
	Linux ingress-addon-legacy-865845 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [d76b6e5f67093782ee2a60dff2bd52556a0a653f88cc20baa863ee40d24e18ef] <==
	I0116 04:19:55.978226       1 main.go:227] handling current node
	I0116 04:20:05.985883       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 04:20:05.985916       1 main.go:227] handling current node
	I0116 04:20:15.989122       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 04:20:15.989149       1 main.go:227] handling current node
	I0116 04:20:25.999826       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 04:20:25.999859       1 main.go:227] handling current node
	I0116 04:20:36.007960       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 04:20:36.007997       1 main.go:227] handling current node
	I0116 04:20:46.015967       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 04:20:46.015999       1 main.go:227] handling current node
	I0116 04:20:56.025912       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 04:20:56.025946       1 main.go:227] handling current node
	I0116 04:21:06.029447       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 04:21:06.029481       1 main.go:227] handling current node
	I0116 04:21:16.038939       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 04:21:16.038967       1 main.go:227] handling current node
	I0116 04:21:26.048638       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 04:21:26.048673       1 main.go:227] handling current node
	I0116 04:21:36.062476       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 04:21:36.062507       1 main.go:227] handling current node
	I0116 04:21:46.073903       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 04:21:46.073935       1 main.go:227] handling current node
	I0116 04:21:56.081624       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0116 04:21:56.081654       1 main.go:227] handling current node
	
	
	==> kube-apiserver [58d49aff392c7cfd53d182101fa00d88e5963465e9f400d64e8bca13e3f23b6d] <==
	I0116 04:18:14.974640       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0116 04:18:14.977632       1 cache.go:39] Caches are synced for autoregister controller
	I0116 04:18:14.978999       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0116 04:18:14.984064       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0116 04:18:14.997591       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0116 04:18:15.772693       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0116 04:18:15.772724       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0116 04:18:15.778698       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0116 04:18:15.782860       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0116 04:18:15.782883       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0116 04:18:16.174126       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0116 04:18:16.224779       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0116 04:18:16.319863       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0116 04:18:16.320911       1 controller.go:609] quota admission added evaluator for: endpoints
	I0116 04:18:16.324694       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0116 04:18:17.217112       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0116 04:18:17.759201       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0116 04:18:17.896052       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0116 04:18:21.163149       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0116 04:18:32.718297       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0116 04:18:32.777301       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0116 04:18:52.316669       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0116 04:19:13.198668       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E0116 04:21:52.329479       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	E0116 04:21:53.641138       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	
	
	==> kube-controller-manager [85101062d8b17f970c0afd1fe18aec256ad6b4aba468f5e3357869925060ba8d] <==
	E0116 04:18:32.823044       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	I0116 04:18:32.852375       1 shared_informer.go:230] Caches are synced for service account 
	I0116 04:18:32.852679       1 shared_informer.go:230] Caches are synced for namespace 
	E0116 04:18:32.863386       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	I0116 04:18:32.972190       1 shared_informer.go:230] Caches are synced for persistent volume 
	I0116 04:18:33.014475       1 shared_informer.go:230] Caches are synced for attach detach 
	I0116 04:18:33.109254       1 shared_informer.go:230] Caches are synced for job 
	I0116 04:18:33.126642       1 shared_informer.go:230] Caches are synced for endpoint 
	I0116 04:18:33.163645       1 shared_informer.go:230] Caches are synced for endpoint_slice 
	I0116 04:18:33.368552       1 shared_informer.go:230] Caches are synced for resource quota 
	I0116 04:18:33.377943       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0116 04:18:33.418680       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0116 04:18:33.418709       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0116 04:18:33.425421       1 shared_informer.go:230] Caches are synced for resource quota 
	I0116 04:18:33.469988       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"82c8f04d-8d11-46ff-9712-e3e7858bdb9a", APIVersion:"apps/v1", ResourceVersion:"368", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I0116 04:18:33.695845       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"f88d04d2-1431-4430-beb4-9f1c63c79e1a", APIVersion:"apps/v1", ResourceVersion:"370", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-vmvgc
	I0116 04:18:42.714288       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0116 04:18:52.295215       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"a4adf565-d8d7-420c-986a-07c63ff4cab7", APIVersion:"apps/v1", ResourceVersion:"476", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0116 04:18:52.325898       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"a0018da1-af64-48fd-b317-e3eb7f16cd9d", APIVersion:"apps/v1", ResourceVersion:"477", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-ft2kb
	I0116 04:18:52.337747       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"12471468-e74a-41d2-8eb8-de65c080b4d3", APIVersion:"batch/v1", ResourceVersion:"481", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-qhlmv
	I0116 04:18:52.388684       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"7d540dcd-d59e-4056-9953-e2916b7458db", APIVersion:"batch/v1", ResourceVersion:"496", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-gntfg
	I0116 04:18:54.684252       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"12471468-e74a-41d2-8eb8-de65c080b4d3", APIVersion:"batch/v1", ResourceVersion:"493", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0116 04:18:55.677260       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"7d540dcd-d59e-4056-9953-e2916b7458db", APIVersion:"batch/v1", ResourceVersion:"502", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0116 04:21:34.152373       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"95360cbb-8a5d-4596-82cd-710138a6bdf4", APIVersion:"apps/v1", ResourceVersion:"714", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0116 04:21:34.187337       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"8503afc0-c13c-4954-82aa-06e38f3ac1a0", APIVersion:"apps/v1", ResourceVersion:"715", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-pk96v
	
	
	==> kube-proxy [c8d22a9622b53d7f6dbe00fb1da33ae4a162238af624c0ad2866956d6edee756] <==
	W0116 04:18:34.158919       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0116 04:18:34.172119       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I0116 04:18:34.172245       1 server_others.go:186] Using iptables Proxier.
	I0116 04:18:34.172634       1 server.go:583] Version: v1.18.20
	I0116 04:18:34.175651       1 config.go:315] Starting service config controller
	I0116 04:18:34.175753       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0116 04:18:34.181832       1 config.go:133] Starting endpoints config controller
	I0116 04:18:34.181913       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0116 04:18:34.289581       1 shared_informer.go:230] Caches are synced for endpoints config 
	I0116 04:18:34.289680       1 shared_informer.go:230] Caches are synced for service config 
	
	
	==> kube-scheduler [02e8129c6e834fa2cedfc987bd27c0228e65cac2d3cdd3a7f4e0f3bca32c5b97] <==
	I0116 04:18:14.980480       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0116 04:18:14.980506       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0116 04:18:14.982660       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0116 04:18:14.982782       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0116 04:18:14.982796       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0116 04:18:14.982821       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0116 04:18:14.988937       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0116 04:18:15.005883       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0116 04:18:15.006016       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0116 04:18:15.006209       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0116 04:18:15.014862       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0116 04:18:15.016020       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0116 04:18:15.016230       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0116 04:18:15.016345       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0116 04:18:15.016442       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0116 04:18:15.016534       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0116 04:18:15.016618       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0116 04:18:15.016710       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0116 04:18:15.836382       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0116 04:18:15.918682       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0116 04:18:16.006490       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0116 04:18:16.051084       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0116 04:18:16.582932       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0116 04:18:32.832058       1 factory.go:503] pod: kube-system/coredns-66bff467f8-vmvgc is already present in the active queue
	E0116 04:18:32.864849       1 factory.go:503] pod: kube-system/coredns-66bff467f8-shnww is already present in the active queue
	
	
	==> kubelet <==
	Jan 16 04:21:39 ingress-addon-legacy-865845 kubelet[1630]: I0116 04:21:39.041749    1630 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: a80e406327ddd73b9878f34b97bdb6ed4d4dadba95ca3c12f2ee81f06a97179e
	Jan 16 04:21:39 ingress-addon-legacy-865845 kubelet[1630]: E0116 04:21:39.042027    1630 pod_workers.go:191] Error syncing pod 6fe242a4-9eb6-4362-85f1-9ffabac16144 ("hello-world-app-5f5d8b66bb-pk96v_default(6fe242a4-9eb6-4362-85f1-9ffabac16144)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-pk96v_default(6fe242a4-9eb6-4362-85f1-9ffabac16144)"
	Jan 16 04:21:40 ingress-addon-legacy-865845 kubelet[1630]: I0116 04:21:40.045710    1630 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: a80e406327ddd73b9878f34b97bdb6ed4d4dadba95ca3c12f2ee81f06a97179e
	Jan 16 04:21:40 ingress-addon-legacy-865845 kubelet[1630]: E0116 04:21:40.045978    1630 pod_workers.go:191] Error syncing pod 6fe242a4-9eb6-4362-85f1-9ffabac16144 ("hello-world-app-5f5d8b66bb-pk96v_default(6fe242a4-9eb6-4362-85f1-9ffabac16144)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-pk96v_default(6fe242a4-9eb6-4362-85f1-9ffabac16144)"
	Jan 16 04:21:48 ingress-addon-legacy-865845 kubelet[1630]: E0116 04:21:48.313423    1630 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jan 16 04:21:48 ingress-addon-legacy-865845 kubelet[1630]: E0116 04:21:48.313461    1630 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jan 16 04:21:48 ingress-addon-legacy-865845 kubelet[1630]: E0116 04:21:48.313506    1630 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jan 16 04:21:48 ingress-addon-legacy-865845 kubelet[1630]: E0116 04:21:48.313539    1630 pod_workers.go:191] Error syncing pod 41ace65f-4737-4756-85d8-75d24b394fee ("kube-ingress-dns-minikube_kube-system(41ace65f-4737-4756-85d8-75d24b394fee)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Jan 16 04:21:50 ingress-addon-legacy-865845 kubelet[1630]: I0116 04:21:50.201078    1630 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-vr6tw" (UniqueName: "kubernetes.io/secret/41ace65f-4737-4756-85d8-75d24b394fee-minikube-ingress-dns-token-vr6tw") pod "41ace65f-4737-4756-85d8-75d24b394fee" (UID: "41ace65f-4737-4756-85d8-75d24b394fee")
	Jan 16 04:21:50 ingress-addon-legacy-865845 kubelet[1630]: I0116 04:21:50.205871    1630 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41ace65f-4737-4756-85d8-75d24b394fee-minikube-ingress-dns-token-vr6tw" (OuterVolumeSpecName: "minikube-ingress-dns-token-vr6tw") pod "41ace65f-4737-4756-85d8-75d24b394fee" (UID: "41ace65f-4737-4756-85d8-75d24b394fee"). InnerVolumeSpecName "minikube-ingress-dns-token-vr6tw". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 16 04:21:50 ingress-addon-legacy-865845 kubelet[1630]: I0116 04:21:50.301317    1630 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-vr6tw" (UniqueName: "kubernetes.io/secret/41ace65f-4737-4756-85d8-75d24b394fee-minikube-ingress-dns-token-vr6tw") on node "ingress-addon-legacy-865845" DevicePath ""
	Jan 16 04:21:51 ingress-addon-legacy-865845 kubelet[1630]: I0116 04:21:51.313325    1630 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: a80e406327ddd73b9878f34b97bdb6ed4d4dadba95ca3c12f2ee81f06a97179e
	Jan 16 04:21:52 ingress-addon-legacy-865845 kubelet[1630]: W0116 04:21:52.066294    1630 pod_container_deletor.go:77] Container "885aa001d780ece206f9a684d2e1bb9504b8ca80e8cb44ce5c95b2ad757d21c7" not found in pod's containers
	Jan 16 04:21:52 ingress-addon-legacy-865845 kubelet[1630]: I0116 04:21:52.068098    1630 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: a80e406327ddd73b9878f34b97bdb6ed4d4dadba95ca3c12f2ee81f06a97179e
	Jan 16 04:21:52 ingress-addon-legacy-865845 kubelet[1630]: I0116 04:21:52.068334    1630 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 28506180dfd7d8f1dd7e59738cce2a1c9b5dd36273f6a0cf62d5ff1995a72ad4
	Jan 16 04:21:52 ingress-addon-legacy-865845 kubelet[1630]: E0116 04:21:52.068565    1630 pod_workers.go:191] Error syncing pod 6fe242a4-9eb6-4362-85f1-9ffabac16144 ("hello-world-app-5f5d8b66bb-pk96v_default(6fe242a4-9eb6-4362-85f1-9ffabac16144)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-pk96v_default(6fe242a4-9eb6-4362-85f1-9ffabac16144)"
	Jan 16 04:21:52 ingress-addon-legacy-865845 kubelet[1630]: E0116 04:21:52.317494    1630 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-ft2kb.17aab915d3d3d6f2", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-ft2kb", UID:"bb6e4811-2b16-44b8-838c-474f693dbe6f", APIVersion:"v1", ResourceVersion:"486", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-865845"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1619ef812a016f2, ext:214614196808, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1619ef812a016f2, ext:214614196808, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-ft2kb.17aab915d3d3d6f2" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jan 16 04:21:52 ingress-addon-legacy-865845 kubelet[1630]: E0116 04:21:52.346974    1630 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-ft2kb.17aab915d3d3d6f2", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-ft2kb", UID:"bb6e4811-2b16-44b8-838c-474f693dbe6f", APIVersion:"v1", ResourceVersion:"486", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-865845"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1619ef812a016f2, ext:214614196808, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1619ef813ce146d, ext:214633988027, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-ft2kb.17aab915d3d3d6f2" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jan 16 04:21:55 ingress-addon-legacy-865845 kubelet[1630]: W0116 04:21:55.074753    1630 pod_container_deletor.go:77] Container "9bb1f5e34887b3dadb43d06206eee5c20518dcc332bcc4c83aade46b75d70cb6" not found in pod's containers
	Jan 16 04:21:56 ingress-addon-legacy-865845 kubelet[1630]: I0116 04:21:56.415809    1630 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-pvr8s" (UniqueName: "kubernetes.io/secret/bb6e4811-2b16-44b8-838c-474f693dbe6f-ingress-nginx-token-pvr8s") pod "bb6e4811-2b16-44b8-838c-474f693dbe6f" (UID: "bb6e4811-2b16-44b8-838c-474f693dbe6f")
	Jan 16 04:21:56 ingress-addon-legacy-865845 kubelet[1630]: I0116 04:21:56.415867    1630 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bb6e4811-2b16-44b8-838c-474f693dbe6f-webhook-cert") pod "bb6e4811-2b16-44b8-838c-474f693dbe6f" (UID: "bb6e4811-2b16-44b8-838c-474f693dbe6f")
	Jan 16 04:21:56 ingress-addon-legacy-865845 kubelet[1630]: I0116 04:21:56.422148    1630 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb6e4811-2b16-44b8-838c-474f693dbe6f-ingress-nginx-token-pvr8s" (OuterVolumeSpecName: "ingress-nginx-token-pvr8s") pod "bb6e4811-2b16-44b8-838c-474f693dbe6f" (UID: "bb6e4811-2b16-44b8-838c-474f693dbe6f"). InnerVolumeSpecName "ingress-nginx-token-pvr8s". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 16 04:21:56 ingress-addon-legacy-865845 kubelet[1630]: I0116 04:21:56.424181    1630 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb6e4811-2b16-44b8-838c-474f693dbe6f-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "bb6e4811-2b16-44b8-838c-474f693dbe6f" (UID: "bb6e4811-2b16-44b8-838c-474f693dbe6f"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 16 04:21:56 ingress-addon-legacy-865845 kubelet[1630]: I0116 04:21:56.516208    1630 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/bb6e4811-2b16-44b8-838c-474f693dbe6f-webhook-cert") on node "ingress-addon-legacy-865845" DevicePath ""
	Jan 16 04:21:56 ingress-addon-legacy-865845 kubelet[1630]: I0116 04:21:56.516271    1630 reconciler.go:319] Volume detached for volume "ingress-nginx-token-pvr8s" (UniqueName: "kubernetes.io/secret/bb6e4811-2b16-44b8-838c-474f693dbe6f-ingress-nginx-token-pvr8s") on node "ingress-addon-legacy-865845" DevicePath ""
	
	
	==> storage-provisioner [eae44963f85d80e62c63fac15fb5dd8027d3497a9af3bce145f78461d0d28d4d] <==
	I0116 04:18:48.500103       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0116 04:18:48.512949       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0116 04:18:48.513035       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0116 04:18:48.521107       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0116 04:18:48.522039       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e3c37f91-c51c-4b22-b493-777fd8afd0e4", APIVersion:"v1", ResourceVersion:"433", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-865845_59600fab-6e3f-48d7-8450-6d6dfbfe4978 became leader
	I0116 04:18:48.523355       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-865845_59600fab-6e3f-48d7-8450-6d6dfbfe4978!
	I0116 04:18:48.624356       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-865845_59600fab-6e3f-48d7-8450-6d6dfbfe4978!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-865845 -n ingress-addon-legacy-865845
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-865845 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (177.85s)
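
Analysis note: the kubelet errors in the log above show why the kube-ingress-dns-minikube pod never started: CRI-O rejects the image reference "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4..." because it is a short name (no registry host) and the node defines no unqualified-search registries. A minimal sketch of the two usual remedies follows; the registries.conf key is standard containers-registries.conf(5) configuration, but applying either change to this node is an assumption, not something this run did:

	# Option 1 (node-side, hypothetical): permit short-name pulls from Docker Hub
	# by adding an unqualified-search entry to /etc/containers/registries.conf:
	unqualified-search-registries = ["docker.io"]

	# Option 2 (manifest-side, hypothetical): fully qualify the image so no
	# short-name search is needed:
	#   image: docker.io/cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab
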

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (3.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-701570 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-701570 -- exec busybox-5bc68d56bd-v42wl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-701570 -- exec busybox-5bc68d56bd-v42wl -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-701570 -- exec busybox-5bc68d56bd-v42wl -- sh -c "ping -c 1 192.168.58.1": exit status 1 (231.788332ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-v42wl): exit status 1
multinode_test.go:588: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-701570 -- exec busybox-5bc68d56bd-x6w9z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-701570 -- exec busybox-5bc68d56bd-x6w9z -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-701570 -- exec busybox-5bc68d56bd-x6w9z -- sh -c "ping -c 1 192.168.58.1": exit status 1 (229.820907ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-x6w9z): exit status 1
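
Analysis note: "ping: permission denied (are you root?)" in both pods' stderr means busybox could not open a raw ICMP socket, not that the gateway 192.168.58.1 was unreachable; CRI-O's default capability set typically omits NET_RAW, so even a root process in the container fails this way. A minimal sketch of a pod spec that would let the probe run, assuming the test's busybox image (the pod name and image tag below are illustrative, not taken from this run):

	# hypothetical pod granting the capability that ping needs
	apiVersion: v1
	kind: Pod
	metadata:
	  name: ping-probe            # illustrative name
	spec:
	  containers:
	  - name: busybox
	    image: busybox:1.28       # assumed tag
	    command: ["sleep", "3600"]
	    securityContext:
	      capabilities:
	        add: ["NET_RAW"]      # allows opening a raw ICMP socket

An alternative that avoids granting NET_RAW is the safe sysctl net.ipv4.ping_group_range (settable via pod-level securityContext.sysctls), provided the ping implementation in the image can fall back to unprivileged ICMP datagram sockets.
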
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-701570
helpers_test.go:235: (dbg) docker inspect multinode-701570:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "28e792c4e9c30d33bd257e8246d0d4bffbcaeaf8e6ab5fe81d7d83b6cf928fc0",
	        "Created": "2024-01-16T04:27:55.486134931Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2485245,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-16T04:27:55.799596489Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20e2d9b56eb2e595fd2b9c5719a0e58f3d7f8c692190d8fde2558cb6a9714f01",
	        "ResolvConfPath": "/var/lib/docker/containers/28e792c4e9c30d33bd257e8246d0d4bffbcaeaf8e6ab5fe81d7d83b6cf928fc0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/28e792c4e9c30d33bd257e8246d0d4bffbcaeaf8e6ab5fe81d7d83b6cf928fc0/hostname",
	        "HostsPath": "/var/lib/docker/containers/28e792c4e9c30d33bd257e8246d0d4bffbcaeaf8e6ab5fe81d7d83b6cf928fc0/hosts",
	        "LogPath": "/var/lib/docker/containers/28e792c4e9c30d33bd257e8246d0d4bffbcaeaf8e6ab5fe81d7d83b6cf928fc0/28e792c4e9c30d33bd257e8246d0d4bffbcaeaf8e6ab5fe81d7d83b6cf928fc0-json.log",
	        "Name": "/multinode-701570",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-701570:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-701570",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2c00fd37155b8b00e2fb0e6d1f09d7019f8f42175bc3f47af1cb0c4210a50899-init/diff:/var/lib/docker/overlay2/4fdef913b89fa4836b2db5064ca9b972974c59582e71c63616575ab943b0844e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2c00fd37155b8b00e2fb0e6d1f09d7019f8f42175bc3f47af1cb0c4210a50899/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2c00fd37155b8b00e2fb0e6d1f09d7019f8f42175bc3f47af1cb0c4210a50899/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2c00fd37155b8b00e2fb0e6d1f09d7019f8f42175bc3f47af1cb0c4210a50899/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-701570",
	                "Source": "/var/lib/docker/volumes/multinode-701570/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-701570",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-701570",
	                "name.minikube.sigs.k8s.io": "multinode-701570",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f052ee7999fd3d88e8fe5c168f758a0cc435041a039a2fa6a40f4bc2cefdcca9",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35391"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35390"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35387"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35389"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35388"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f052ee7999fd",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-701570": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "28e792c4e9c3",
	                        "multinode-701570"
	                    ],
	                    "NetworkID": "433a7a0b5634ac9fc48d0261eb3796fd7b5977045e8079bafca5af9955c9b53d",
	                    "EndpointID": "fff43a389c366e6df6644007768d901bad401db6aa2b68cf63aa5fe15e8cbcfb",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p multinode-701570 -n multinode-701570
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p multinode-701570 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p multinode-701570 logs -n 25: (1.581590371s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-598941                           | mount-start-2-598941 | jenkins | v1.32.0 | 16 Jan 24 04:27 UTC | 16 Jan 24 04:27 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-598941 ssh -- ls                    | mount-start-2-598941 | jenkins | v1.32.0 | 16 Jan 24 04:27 UTC | 16 Jan 24 04:27 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-597136                           | mount-start-1-597136 | jenkins | v1.32.0 | 16 Jan 24 04:27 UTC | 16 Jan 24 04:27 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-598941 ssh -- ls                    | mount-start-2-598941 | jenkins | v1.32.0 | 16 Jan 24 04:27 UTC | 16 Jan 24 04:27 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-598941                           | mount-start-2-598941 | jenkins | v1.32.0 | 16 Jan 24 04:27 UTC | 16 Jan 24 04:27 UTC |
	| start   | -p mount-start-2-598941                           | mount-start-2-598941 | jenkins | v1.32.0 | 16 Jan 24 04:27 UTC | 16 Jan 24 04:27 UTC |
	| ssh     | mount-start-2-598941 ssh -- ls                    | mount-start-2-598941 | jenkins | v1.32.0 | 16 Jan 24 04:27 UTC | 16 Jan 24 04:27 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-598941                           | mount-start-2-598941 | jenkins | v1.32.0 | 16 Jan 24 04:27 UTC | 16 Jan 24 04:27 UTC |
	| delete  | -p mount-start-1-597136                           | mount-start-1-597136 | jenkins | v1.32.0 | 16 Jan 24 04:27 UTC | 16 Jan 24 04:27 UTC |
	| start   | -p multinode-701570                               | multinode-701570     | jenkins | v1.32.0 | 16 Jan 24 04:27 UTC | 16 Jan 24 04:29 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-701570 -- apply -f                   | multinode-701570     | jenkins | v1.32.0 | 16 Jan 24 04:29 UTC | 16 Jan 24 04:29 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-701570 -- rollout                    | multinode-701570     | jenkins | v1.32.0 | 16 Jan 24 04:29 UTC | 16 Jan 24 04:29 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-701570 -- get pods -o                | multinode-701570     | jenkins | v1.32.0 | 16 Jan 24 04:29 UTC | 16 Jan 24 04:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-701570 -- get pods -o                | multinode-701570     | jenkins | v1.32.0 | 16 Jan 24 04:29 UTC | 16 Jan 24 04:29 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-701570 -- exec                       | multinode-701570     | jenkins | v1.32.0 | 16 Jan 24 04:29 UTC | 16 Jan 24 04:29 UTC |
	|         | busybox-5bc68d56bd-v42wl --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-701570 -- exec                       | multinode-701570     | jenkins | v1.32.0 | 16 Jan 24 04:29 UTC | 16 Jan 24 04:29 UTC |
	|         | busybox-5bc68d56bd-x6w9z --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-701570 -- exec                       | multinode-701570     | jenkins | v1.32.0 | 16 Jan 24 04:29 UTC | 16 Jan 24 04:29 UTC |
	|         | busybox-5bc68d56bd-v42wl --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-701570 -- exec                       | multinode-701570     | jenkins | v1.32.0 | 16 Jan 24 04:29 UTC | 16 Jan 24 04:29 UTC |
	|         | busybox-5bc68d56bd-x6w9z --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-701570 -- exec                       | multinode-701570     | jenkins | v1.32.0 | 16 Jan 24 04:29 UTC | 16 Jan 24 04:29 UTC |
	|         | busybox-5bc68d56bd-v42wl -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-701570 -- exec                       | multinode-701570     | jenkins | v1.32.0 | 16 Jan 24 04:29 UTC | 16 Jan 24 04:29 UTC |
	|         | busybox-5bc68d56bd-x6w9z -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-701570 -- get pods -o                | multinode-701570     | jenkins | v1.32.0 | 16 Jan 24 04:29 UTC | 16 Jan 24 04:29 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-701570 -- exec                       | multinode-701570     | jenkins | v1.32.0 | 16 Jan 24 04:29 UTC | 16 Jan 24 04:29 UTC |
	|         | busybox-5bc68d56bd-v42wl                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-701570 -- exec                       | multinode-701570     | jenkins | v1.32.0 | 16 Jan 24 04:29 UTC |                     |
	|         | busybox-5bc68d56bd-v42wl -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-701570 -- exec                       | multinode-701570     | jenkins | v1.32.0 | 16 Jan 24 04:29 UTC | 16 Jan 24 04:29 UTC |
	|         | busybox-5bc68d56bd-x6w9z                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-701570 -- exec                       | multinode-701570     | jenkins | v1.32.0 | 16 Jan 24 04:29 UTC |                     |
	|         | busybox-5bc68d56bd-x6w9z -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 04:27:50
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 04:27:50.143645 2484801 out.go:296] Setting OutFile to fd 1 ...
	I0116 04:27:50.143906 2484801 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 04:27:50.143935 2484801 out.go:309] Setting ErrFile to fd 2...
	I0116 04:27:50.143955 2484801 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 04:27:50.144246 2484801 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-2415678/.minikube/bin
	I0116 04:27:50.144742 2484801 out.go:303] Setting JSON to false
	I0116 04:27:50.145700 2484801 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":40201,"bootTime":1705339069,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0116 04:27:50.145803 2484801 start.go:138] virtualization:  
	I0116 04:27:50.149844 2484801 out.go:177] * [multinode-701570] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0116 04:27:50.152172 2484801 out.go:177]   - MINIKUBE_LOCATION=17965
	I0116 04:27:50.152351 2484801 notify.go:220] Checking for updates...
	I0116 04:27:50.155811 2484801 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 04:27:50.157833 2484801 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17965-2415678/kubeconfig
	I0116 04:27:50.159859 2484801 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-2415678/.minikube
	I0116 04:27:50.161819 2484801 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0116 04:27:50.163747 2484801 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 04:27:50.166050 2484801 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 04:27:50.190364 2484801 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0116 04:27:50.190490 2484801 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 04:27:50.275360 2484801 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2024-01-16 04:27:50.265345559 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0116 04:27:50.275461 2484801 docker.go:295] overlay module found
	I0116 04:27:50.277761 2484801 out.go:177] * Using the docker driver based on user configuration
	I0116 04:27:50.279877 2484801 start.go:298] selected driver: docker
	I0116 04:27:50.279899 2484801 start.go:902] validating driver "docker" against <nil>
	I0116 04:27:50.279926 2484801 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 04:27:50.280664 2484801 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 04:27:50.345427 2484801 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2024-01-16 04:27:50.33617307 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
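
The two docker info dumps above are one round trip each: minikube shells out to docker system info --format "{{json .}}" and deserializes the reply to validate the driver. A minimal sketch of that round trip, assuming only a docker CLI on PATH; the DockerInfo struct below is a hand-picked illustrative subset, not minikube's own type:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DockerInfo is a hypothetical subset of the fields the log shows
// minikube reading (NCPU, MemTotal, CgroupDriver, ServerVersion).
type DockerInfo struct {
	NCPU          int    `json:"NCPU"`
	MemTotal      int64  `json:"MemTotal"`
	CgroupDriver  string `json:"CgroupDriver"`
	ServerVersion string `json:"ServerVersion"`
}

func main() {
	// The same command the log records: ask the daemon for its info as JSON.
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		panic(err)
	}
	var info DockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	fmt.Printf("cpus=%d mem=%d cgroup=%s version=%s\n",
		info.NCPU, info.MemTotal, info.CgroupDriver, info.ServerVersion)
}
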
	I0116 04:27:50.345596 2484801 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0116 04:27:50.345910 2484801 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0116 04:27:50.347777 2484801 out.go:177] * Using Docker driver with root privileges
	I0116 04:27:50.349506 2484801 cni.go:84] Creating CNI manager for ""
	I0116 04:27:50.349526 2484801 cni.go:136] 0 nodes found, recommending kindnet
	I0116 04:27:50.349537 2484801 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0116 04:27:50.349550 2484801 start_flags.go:321] config:
	{Name:multinode-701570 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-701570 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 04:27:50.352040 2484801 out.go:177] * Starting control plane node multinode-701570 in cluster multinode-701570
	I0116 04:27:50.353605 2484801 cache.go:121] Beginning downloading kic base image for docker with crio
	I0116 04:27:50.355668 2484801 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0116 04:27:50.357572 2484801 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 04:27:50.357620 2484801 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17965-2415678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I0116 04:27:50.357638 2484801 cache.go:56] Caching tarball of preloaded images
	I0116 04:27:50.357661 2484801 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0116 04:27:50.357717 2484801 preload.go:174] Found /home/jenkins/minikube-integration/17965-2415678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0116 04:27:50.357727 2484801 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0116 04:27:50.358075 2484801 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/multinode-701570/config.json ...
	I0116 04:27:50.358105 2484801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/multinode-701570/config.json: {Name:mk64029ec8f425ad23d20ef5989490a0ef81b843 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
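
The lock.go:35 line writes the profile's config.json under a lock with a 500ms retry delay and a 1m timeout. A sketch of that acquire-with-retry shape, using a plain O_EXCL lockfile for illustration (not necessarily the mechanism minikube uses); the path below is hypothetical:

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// acquire polls for an exclusive lockfile, retrying every delay until timeout,
// mirroring the {Delay:500ms Timeout:1m0s} parameters in the log line.
func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, errors.New("timed out waiting for lock " + path)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquire("/tmp/config.json.lock", 500*time.Millisecond, time.Minute)
	if err != nil {
		panic(err)
	}
	defer release()
	// With the lock held, the config write cannot race another minikube process.
	fmt.Println("lock held; safe to write config.json")
}
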
	I0116 04:27:50.374856 2484801 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0116 04:27:50.374931 2484801 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0116 04:27:50.374954 2484801 cache.go:194] Successfully downloaded all kic artifacts
	I0116 04:27:50.375023 2484801 start.go:365] acquiring machines lock for multinode-701570: {Name:mk39ef4f6cc927598a13006064084af466452eca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 04:27:50.375143 2484801 start.go:369] acquired machines lock for "multinode-701570" in 95.431µs
	I0116 04:27:50.375169 2484801 start.go:93] Provisioning new machine with config: &{Name:multinode-701570 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-701570 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 04:27:50.375257 2484801 start.go:125] createHost starting for "" (driver="docker")
	I0116 04:27:50.377522 2484801 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0116 04:27:50.377757 2484801 start.go:159] libmachine.API.Create for "multinode-701570" (driver="docker")
	I0116 04:27:50.377789 2484801 client.go:168] LocalClient.Create starting
	I0116 04:27:50.377870 2484801 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/ca.pem
	I0116 04:27:50.377909 2484801 main.go:141] libmachine: Decoding PEM data...
	I0116 04:27:50.377925 2484801 main.go:141] libmachine: Parsing certificate...
	I0116 04:27:50.377980 2484801 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/cert.pem
	I0116 04:27:50.378000 2484801 main.go:141] libmachine: Decoding PEM data...
	I0116 04:27:50.378010 2484801 main.go:141] libmachine: Parsing certificate...
	I0116 04:27:50.378354 2484801 cli_runner.go:164] Run: docker network inspect multinode-701570 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0116 04:27:50.395275 2484801 cli_runner.go:211] docker network inspect multinode-701570 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0116 04:27:50.395373 2484801 network_create.go:281] running [docker network inspect multinode-701570] to gather additional debugging logs...
	I0116 04:27:50.395396 2484801 cli_runner.go:164] Run: docker network inspect multinode-701570
	W0116 04:27:50.411995 2484801 cli_runner.go:211] docker network inspect multinode-701570 returned with exit code 1
	I0116 04:27:50.412040 2484801 network_create.go:284] error running [docker network inspect multinode-701570]: docker network inspect multinode-701570: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-701570 not found
	I0116 04:27:50.412052 2484801 network_create.go:286] output of [docker network inspect multinode-701570]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-701570 not found
	
	** /stderr **
	I0116 04:27:50.412143 2484801 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0116 04:27:50.431109 2484801 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e2c29f743d68 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:1b:d5:d5:19} reservation:<nil>}
	I0116 04:27:50.431456 2484801 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40024cb6a0}
	I0116 04:27:50.431479 2484801 network_create.go:124] attempt to create docker network multinode-701570 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0116 04:27:50.431547 2484801 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-701570 multinode-701570
	I0116 04:27:50.499254 2484801 network_create.go:108] docker network multinode-701570 192.168.58.0/24 created
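
network.go above walks candidate private /24 subnets, skips 192.168.49.0/24 because an existing bridge (the earlier cluster's) already owns it, and settles on 192.168.58.0/24. A toy version of that skip-until-free scan; the step size and candidate range are illustrative guesses, and the taken set would really be populated from docker network inspect output:

package main

import "fmt"

func main() {
	// Subnets already claimed by existing bridges; in the log this set
	// comes from inspecting the docker networks, here it is hard-coded.
	taken := map[string]bool{"192.168.49.0/24": true}

	// Stepping the third octet from 49 merely illustrates the loop;
	// the log only shows 49 being skipped and 58 being chosen.
	for third := 49; third <= 103; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		if taken[subnet] {
			fmt.Println("skipping subnet", subnet, "that is taken")
			continue
		}
		fmt.Println("using free private subnet", subnet)
		break
	}
}
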
	I0116 04:27:50.499287 2484801 kic.go:121] calculated static IP "192.168.58.2" for the "multinode-701570" container
	I0116 04:27:50.499366 2484801 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0116 04:27:50.518362 2484801 cli_runner.go:164] Run: docker volume create multinode-701570 --label name.minikube.sigs.k8s.io=multinode-701570 --label created_by.minikube.sigs.k8s.io=true
	I0116 04:27:50.537213 2484801 oci.go:103] Successfully created a docker volume multinode-701570
	I0116 04:27:50.537301 2484801 cli_runner.go:164] Run: docker run --rm --name multinode-701570-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-701570 --entrypoint /usr/bin/test -v multinode-701570:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0116 04:27:51.086037 2484801 oci.go:107] Successfully prepared a docker volume multinode-701570
	I0116 04:27:51.086104 2484801 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 04:27:51.086125 2484801 kic.go:194] Starting extracting preloaded images to volume ...
	I0116 04:27:51.086209 2484801 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17965-2415678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-701570:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0116 04:27:55.405450 2484801 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17965-2415678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-701570:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.319197717s)
	I0116 04:27:55.405485 2484801 kic.go:203] duration metric: took 4.319357 seconds to extract preloaded images to volume
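
The extraction step runs the kicbase image as a throwaway tar container: the lz4 preload tarball is bind-mounted read-only and unpacked directly into the named volume that becomes the node's /var. A sketch of composing that invocation from Go (tarball path shortened; image tag from the log with the digest omitted):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	tarball := "/path/to/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4" // shortened
	volume := "multinode-701570"
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866"

	// --rm throwaway container with tar as the entrypoint: read the lz4
	// preload from the read-only bind mount and unpack it into the volume.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")

	start := time.Now()
	if out, err := cmd.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("%v: %s", err, out))
	}
	// The log's "duration metric" line is just this elapsed time.
	fmt.Printf("took %s to extract preloaded images to volume\n", time.Since(start))
}
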
	W0116 04:27:55.405636 2484801 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0116 04:27:55.405740 2484801 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0116 04:27:55.469882 2484801 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-701570 --name multinode-701570 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-701570 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-701570 --network multinode-701570 --ip 192.168.58.2 --volume multinode-701570:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0
	I0116 04:27:55.807809 2484801 cli_runner.go:164] Run: docker container inspect multinode-701570 --format={{.State.Running}}
	I0116 04:27:55.830350 2484801 cli_runner.go:164] Run: docker container inspect multinode-701570 --format={{.State.Status}}
	I0116 04:27:55.850038 2484801 cli_runner.go:164] Run: docker exec multinode-701570 stat /var/lib/dpkg/alternatives/iptables
	I0116 04:27:55.924927 2484801 oci.go:144] the created container "multinode-701570" has a running status.
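
The running status is confirmed by formatting .State.Running out of docker container inspect, the probe visible at 04:27:55.807 and 04:27:55.830. A sketch of polling that flag until the container reports running; the 30s deadline is an arbitrary choice here, not minikube's:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// isRunning asks the docker CLI for the container's .State.Running flag,
// using exactly the --format template the log shows.
func isRunning(name string) (bool, error) {
	out, err := exec.Command("docker", "container", "inspect",
		name, "--format", "{{.State.Running}}").Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "true", nil
}

func main() {
	deadline := time.Now().Add(30 * time.Second) // arbitrary wait for this sketch
	for {
		running, err := isRunning("multinode-701570")
		if err == nil && running {
			fmt.Println(`container "multinode-701570" has a running status`)
			return
		}
		if time.Now().After(deadline) {
			panic("container never reached running state")
		}
		time.Sleep(500 * time.Millisecond)
	}
}
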
	I0116 04:27:55.924962 2484801 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17965-2415678/.minikube/machines/multinode-701570/id_rsa...
	I0116 04:27:56.415837 2484801 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-2415678/.minikube/machines/multinode-701570/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0116 04:27:56.415924 2484801 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17965-2415678/.minikube/machines/multinode-701570/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0116 04:27:56.445889 2484801 cli_runner.go:164] Run: docker container inspect multinode-701570 --format={{.State.Status}}
	I0116 04:27:56.485165 2484801 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0116 04:27:56.485189 2484801 kic_runner.go:114] Args: [docker exec --privileged multinode-701570 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0116 04:27:56.573664 2484801 cli_runner.go:164] Run: docker container inspect multinode-701570 --format={{.State.Status}}
	I0116 04:27:56.599745 2484801 machine.go:88] provisioning docker machine ...
	I0116 04:27:56.599773 2484801 ubuntu.go:169] provisioning hostname "multinode-701570"
	I0116 04:27:56.599837 2484801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701570
	I0116 04:27:56.643032 2484801 main.go:141] libmachine: Using SSH client type: native
	I0116 04:27:56.643579 2484801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 35391 <nil> <nil>}
	I0116 04:27:56.643597 2484801 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-701570 && echo "multinode-701570" | sudo tee /etc/hostname
	I0116 04:27:56.852277 2484801 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-701570
	
	I0116 04:27:56.852439 2484801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701570
	I0116 04:27:56.879458 2484801 main.go:141] libmachine: Using SSH client type: native
	I0116 04:27:56.879844 2484801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 35391 <nil> <nil>}
	I0116 04:27:56.879860 2484801 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-701570' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-701570/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-701570' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 04:27:57.026688 2484801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
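
Provisioning talks to the node over SSH on the forwarded host port (127.0.0.1:35391) with the freshly generated id_rsa. A sketch of one such round trip with golang.org/x/crypto/ssh, assuming the key path and port from the log; host-key checking is skipped, as is reasonable for a throwaway local container:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// The key created at kic.go:225 above.
	pemBytes, err := os.ReadFile("/home/jenkins/minikube-integration/17965-2415678/.minikube/machines/multinode-701570/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(pemBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local throwaway container
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:35391", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	// The hostname command from the log, verbatim.
	out, err := sess.CombinedOutput(`sudo hostname multinode-701570 && echo "multinode-701570" | sudo tee /etc/hostname`)
	fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
}
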
	I0116 04:27:57.026719 2484801 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17965-2415678/.minikube CaCertPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17965-2415678/.minikube}
	I0116 04:27:57.026749 2484801 ubuntu.go:177] setting up certificates
	I0116 04:27:57.026763 2484801 provision.go:83] configureAuth start
	I0116 04:27:57.026830 2484801 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-701570
	I0116 04:27:57.044996 2484801 provision.go:138] copyHostCerts
	I0116 04:27:57.045041 2484801 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17965-2415678/.minikube/ca.pem
	I0116 04:27:57.045075 2484801 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-2415678/.minikube/ca.pem, removing ...
	I0116 04:27:57.045088 2484801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-2415678/.minikube/ca.pem
	I0116 04:27:57.045167 2484801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17965-2415678/.minikube/ca.pem (1078 bytes)
	I0116 04:27:57.045250 2484801 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17965-2415678/.minikube/cert.pem
	I0116 04:27:57.045273 2484801 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-2415678/.minikube/cert.pem, removing ...
	I0116 04:27:57.045281 2484801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-2415678/.minikube/cert.pem
	I0116 04:27:57.045307 2484801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17965-2415678/.minikube/cert.pem (1123 bytes)
	I0116 04:27:57.045351 2484801 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17965-2415678/.minikube/key.pem
	I0116 04:27:57.045376 2484801 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-2415678/.minikube/key.pem, removing ...
	I0116 04:27:57.045384 2484801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-2415678/.minikube/key.pem
	I0116 04:27:57.045408 2484801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17965-2415678/.minikube/key.pem (1679 bytes)
	I0116 04:27:57.045455 2484801 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17965-2415678/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17965-2415678/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17965-2415678/.minikube/certs/ca-key.pem org=jenkins.multinode-701570 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-701570]
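
provision.go:112 generates a server certificate whose SANs cover the node IP, loopback, and the machine names listed in the san=[...] field. A compressed crypto/x509 sketch of producing such a cert; it self-signs for brevity, whereas the real flow signs with the ca-key.pem named in the log:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Fresh key pair for the server certificate.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	// SANs copied from the provision.go:112 line: node IP, loopback, names.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-701570"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		IPAddresses:  []net.IP{net.ParseIP("192.168.58.2"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "multinode-701570"},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}

	// Self-signed here for brevity; the real flow signs with the CA key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
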
	I0116 04:27:57.550875 2484801 provision.go:172] copyRemoteCerts
	I0116 04:27:57.550969 2484801 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 04:27:57.551012 2484801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701570
	I0116 04:27:57.568629 2484801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35391 SSHKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/machines/multinode-701570/id_rsa Username:docker}
	I0116 04:27:57.667124 2484801 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-2415678/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0116 04:27:57.667186 2484801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-2415678/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0116 04:27:57.694965 2484801 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-2415678/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0116 04:27:57.695027 2484801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-2415678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 04:27:57.723416 2484801 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0116 04:27:57.723477 2484801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0116 04:27:57.751545 2484801 provision.go:86] duration metric: configureAuth took 724.760804ms
	I0116 04:27:57.751573 2484801 ubuntu.go:193] setting minikube options for container-runtime
	I0116 04:27:57.751766 2484801 config.go:182] Loaded profile config "multinode-701570": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 04:27:57.751874 2484801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701570
	I0116 04:27:57.770153 2484801 main.go:141] libmachine: Using SSH client type: native
	I0116 04:27:57.770572 2484801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 35391 <nil> <nil>}
	I0116 04:27:57.770595 2484801 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 04:27:58.024964 2484801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 04:27:58.024990 2484801 machine.go:91] provisioned docker machine in 1.425226192s
	I0116 04:27:58.025000 2484801 client.go:171] LocalClient.Create took 7.647205767s
	I0116 04:27:58.025021 2484801 start.go:167] duration metric: libmachine.API.Create for "multinode-701570" took 7.647266031s
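
The printf %!s(MISSING) in the logged command above (and the 12%! / (MISSING) and %!p(MISSING) fragments further down) never ran on the node: it is Go's fmt package flagging a format verb with no matching operand when the log line itself was rendered, so the remote command's own %s survived intact. The artifact reproduces in a few lines:

package main

import "fmt"

func main() {
	// A %s verb with no operand: fmt renders the placeholder seen in the log.
	fmt.Printf("sudo mkdir -p /etc/sysconfig && printf %s\n")
	// Output: sudo mkdir -p /etc/sysconfig && printf %!s(MISSING)
}
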
	I0116 04:27:58.025028 2484801 start.go:300] post-start starting for "multinode-701570" (driver="docker")
	I0116 04:27:58.025039 2484801 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 04:27:58.025103 2484801 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 04:27:58.025147 2484801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701570
	I0116 04:27:58.044333 2484801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35391 SSHKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/machines/multinode-701570/id_rsa Username:docker}
	I0116 04:27:58.143590 2484801 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 04:27:58.147454 2484801 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I0116 04:27:58.147474 2484801 command_runner.go:130] > NAME="Ubuntu"
	I0116 04:27:58.147482 2484801 command_runner.go:130] > VERSION_ID="22.04"
	I0116 04:27:58.147489 2484801 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I0116 04:27:58.147495 2484801 command_runner.go:130] > VERSION_CODENAME=jammy
	I0116 04:27:58.147503 2484801 command_runner.go:130] > ID=ubuntu
	I0116 04:27:58.147512 2484801 command_runner.go:130] > ID_LIKE=debian
	I0116 04:27:58.147518 2484801 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0116 04:27:58.147528 2484801 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0116 04:27:58.147536 2484801 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0116 04:27:58.147544 2484801 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0116 04:27:58.147550 2484801 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0116 04:27:58.147599 2484801 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0116 04:27:58.147630 2484801 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0116 04:27:58.147644 2484801 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0116 04:27:58.147652 2484801 info.go:137] Remote host: Ubuntu 22.04.3 LTS
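
The three "Couldn't set key" warnings just mean the os-release struct has no fields for VERSION_CODENAME, PRIVACY_POLICY_URL, or UBUNTU_CODENAME; every other key from cat /etc/os-release mapped cleanly. A sketch of that key=value scan (the known-field set below is illustrative, not libmachine's):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/os-release")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Illustrative stand-in for the struct's field set.
	known := map[string]bool{"NAME": true, "VERSION_ID": true, "PRETTY_NAME": true, "ID": true}
	values := map[string]string{}

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		key, val, ok := strings.Cut(sc.Text(), "=")
		if !ok {
			continue
		}
		if !known[key] {
			// The log's "Couldn't set key X, no corresponding struct field found".
			fmt.Printf("Couldn't set key %s, no corresponding struct field found\n", key)
			continue
		}
		values[key] = strings.Trim(val, `"`)
	}
	fmt.Println("Remote host:", values["PRETTY_NAME"])
}
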
	I0116 04:27:58.147663 2484801 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-2415678/.minikube/addons for local assets ...
	I0116 04:27:58.147719 2484801 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-2415678/.minikube/files for local assets ...
	I0116 04:27:58.147805 2484801 filesync.go:149] local asset: /home/jenkins/minikube-integration/17965-2415678/.minikube/files/etc/ssl/certs/24210052.pem -> 24210052.pem in /etc/ssl/certs
	I0116 04:27:58.147816 2484801 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-2415678/.minikube/files/etc/ssl/certs/24210052.pem -> /etc/ssl/certs/24210052.pem
	I0116 04:27:58.147915 2484801 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 04:27:58.158183 2484801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-2415678/.minikube/files/etc/ssl/certs/24210052.pem --> /etc/ssl/certs/24210052.pem (1708 bytes)
	I0116 04:27:58.186078 2484801 start.go:303] post-start completed in 161.035428ms
	I0116 04:27:58.186435 2484801 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-701570
	I0116 04:27:58.204427 2484801 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/multinode-701570/config.json ...
	I0116 04:27:58.204719 2484801 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0116 04:27:58.204861 2484801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701570
	I0116 04:27:58.221628 2484801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35391 SSHKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/machines/multinode-701570/id_rsa Username:docker}
	I0116 04:27:58.314009 2484801 command_runner.go:130] > 12%!
	(MISSING)I0116 04:27:58.314512 2484801 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0116 04:27:58.319898 2484801 command_runner.go:130] > 172G
	I0116 04:27:58.319935 2484801 start.go:128] duration metric: createHost completed in 7.944668641s
	I0116 04:27:58.319946 2484801 start.go:83] releasing machines lock for "multinode-701570", held for 7.944794537s
	I0116 04:27:58.320033 2484801 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-701570
	I0116 04:27:58.337035 2484801 ssh_runner.go:195] Run: cat /version.json
	I0116 04:27:58.337069 2484801 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 04:27:58.337100 2484801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701570
	I0116 04:27:58.337131 2484801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701570
	I0116 04:27:58.360915 2484801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35391 SSHKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/machines/multinode-701570/id_rsa Username:docker}
	I0116 04:27:58.366930 2484801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35391 SSHKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/machines/multinode-701570/id_rsa Username:docker}
	I0116 04:27:58.456981 2484801 command_runner.go:130] > {"iso_version": "v1.32.1-1703784139-17866", "kicbase_version": "v0.0.42-1704759386-17866", "minikube_version": "v1.32.0", "commit": "3c45a4d018cdc90b01d9bcb479fb293aad58ed8f"}
	I0116 04:27:58.457108 2484801 ssh_runner.go:195] Run: systemctl --version
	I0116 04:27:58.591407 2484801 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0116 04:27:58.594567 2484801 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.11)
	I0116 04:27:58.594596 2484801 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0116 04:27:58.594655 2484801 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 04:27:58.741412 2484801 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0116 04:27:58.746479 2484801 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0116 04:27:58.746506 2484801 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0116 04:27:58.746515 2484801 command_runner.go:130] > Device: 3ah/58d	Inode: 1823289     Links: 1
	I0116 04:27:58.746523 2484801 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0116 04:27:58.746530 2484801 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0116 04:27:58.746536 2484801 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0116 04:27:58.746543 2484801 command_runner.go:130] > Change: 2024-01-16 04:06:01.188365486 +0000
	I0116 04:27:58.746550 2484801 command_runner.go:130] >  Birth: 2024-01-16 04:06:01.188365486 +0000
	I0116 04:27:58.746816 2484801 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 04:27:58.770049 2484801 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0116 04:27:58.770135 2484801 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 04:27:58.806375 2484801 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0116 04:27:58.806422 2484801 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
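
The stock loopback, bridge, and podman CNI configs are neutralized by renaming them to *.mk_disabled, leaving only the CNI config minikube manages. The log does this with find -exec sh -c "sudo mv {} {}.mk_disabled"; the same move in Go, with the glob patterns copied from the two find commands (needs root, like the original):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// The same patterns the two find commands in the log target.
	patterns := []string{"*loopback.conf*", "*bridge*", "*podman*"}
	for _, p := range patterns {
		matches, err := filepath.Glob(filepath.Join("/etc/cni/net.d", p))
		if err != nil {
			panic(err)
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled on a previous run
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				panic(err)
			}
			fmt.Println("disabled", m)
		}
	}
}
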
	I0116 04:27:58.806430 2484801 start.go:475] detecting cgroup driver to use...
	I0116 04:27:58.806463 2484801 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0116 04:27:58.806519 2484801 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 04:27:58.825783 2484801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 04:27:58.839480 2484801 docker.go:217] disabling cri-docker service (if available) ...
	I0116 04:27:58.839592 2484801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 04:27:58.855787 2484801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 04:27:58.877151 2484801 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 04:27:58.969871 2484801 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 04:27:59.070813 2484801 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0116 04:27:59.070841 2484801 docker.go:233] disabling docker service ...
	I0116 04:27:59.070922 2484801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 04:27:59.093091 2484801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 04:27:59.107044 2484801 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 04:27:59.196847 2484801 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0116 04:27:59.197237 2484801 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 04:27:59.294659 2484801 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0116 04:27:59.294799 2484801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0116 04:27:59.308359 2484801 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 04:27:59.326300 2484801 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0116 04:27:59.327463 2484801 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 04:27:59.327540 2484801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 04:27:59.339503 2484801 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 04:27:59.339603 2484801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 04:27:59.351503 2484801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 04:27:59.363428 2484801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 04:27:59.374866 2484801 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 04:27:59.386299 2484801 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 04:27:59.395272 2484801 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0116 04:27:59.396391 2484801 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
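
The sed calls above are line-oriented rewrites of /etc/crio/crio.conf.d/02-crio.conf: pin pause_image to registry.k8s.io/pause:3.9, force cgroup_manager to cgroupfs, and re-add conmon_cgroup as "pod". A regexp sketch of the same rewrites, operating on an in-memory sample instead of the real file:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := []byte("pause_image = \"old\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n")

	// Mirrors: sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(conf, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	// Mirrors: sed 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(conf, []byte(`cgroup_manager = "cgroupfs"`))
	// Mirrors: delete conmon_cgroup, then append it as "pod" after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAll(conf, nil)
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAll(conf, []byte("$1\nconmon_cgroup = \"pod\""))

	fmt.Print(string(conf))
}
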
	I0116 04:27:59.406447 2484801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 04:27:59.500484 2484801 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0116 04:27:59.615845 2484801 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 04:27:59.615945 2484801 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 04:27:59.620480 2484801 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0116 04:27:59.620504 2484801 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0116 04:27:59.620512 2484801 command_runner.go:130] > Device: 43h/67d	Inode: 186         Links: 1
	I0116 04:27:59.620521 2484801 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0116 04:27:59.620545 2484801 command_runner.go:130] > Access: 2024-01-16 04:27:59.598903656 +0000
	I0116 04:27:59.620567 2484801 command_runner.go:130] > Modify: 2024-01-16 04:27:59.598903656 +0000
	I0116 04:27:59.620575 2484801 command_runner.go:130] > Change: 2024-01-16 04:27:59.598903656 +0000
	I0116 04:27:59.620586 2484801 command_runner.go:130] >  Birth: -
	I0116 04:27:59.620603 2484801 start.go:543] Will wait 60s for crictl version
	I0116 04:27:59.620659 2484801 ssh_runner.go:195] Run: which crictl
	I0116 04:27:59.624771 2484801 command_runner.go:130] > /usr/bin/crictl
	I0116 04:27:59.624861 2484801 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 04:27:59.669232 2484801 command_runner.go:130] > Version:  0.1.0
	I0116 04:27:59.669290 2484801 command_runner.go:130] > RuntimeName:  cri-o
	I0116 04:27:59.669318 2484801 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0116 04:27:59.669503 2484801 command_runner.go:130] > RuntimeApiVersion:  v1
	I0116 04:27:59.671972 2484801 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0116 04:27:59.672101 2484801 ssh_runner.go:195] Run: crio --version
	I0116 04:27:59.711580 2484801 command_runner.go:130] > crio version 1.24.6
	I0116 04:27:59.711605 2484801 command_runner.go:130] > Version:          1.24.6
	I0116 04:27:59.711615 2484801 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0116 04:27:59.711621 2484801 command_runner.go:130] > GitTreeState:     clean
	I0116 04:27:59.711628 2484801 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0116 04:27:59.711668 2484801 command_runner.go:130] > GoVersion:        go1.18.2
	I0116 04:27:59.711677 2484801 command_runner.go:130] > Compiler:         gc
	I0116 04:27:59.711683 2484801 command_runner.go:130] > Platform:         linux/arm64
	I0116 04:27:59.711704 2484801 command_runner.go:130] > Linkmode:         dynamic
	I0116 04:27:59.711727 2484801 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0116 04:27:59.711747 2484801 command_runner.go:130] > SeccompEnabled:   true
	I0116 04:27:59.711758 2484801 command_runner.go:130] > AppArmorEnabled:  false
	I0116 04:27:59.714477 2484801 ssh_runner.go:195] Run: crio --version
	I0116 04:27:59.754097 2484801 command_runner.go:130] > crio version 1.24.6
	I0116 04:27:59.754131 2484801 command_runner.go:130] > Version:          1.24.6
	I0116 04:27:59.754145 2484801 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0116 04:27:59.754151 2484801 command_runner.go:130] > GitTreeState:     clean
	I0116 04:27:59.754166 2484801 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0116 04:27:59.754205 2484801 command_runner.go:130] > GoVersion:        go1.18.2
	I0116 04:27:59.754224 2484801 command_runner.go:130] > Compiler:         gc
	I0116 04:27:59.754231 2484801 command_runner.go:130] > Platform:         linux/arm64
	I0116 04:27:59.754240 2484801 command_runner.go:130] > Linkmode:         dynamic
	I0116 04:27:59.754249 2484801 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0116 04:27:59.754255 2484801 command_runner.go:130] > SeccompEnabled:   true
	I0116 04:27:59.754260 2484801 command_runner.go:130] > AppArmorEnabled:  false
	I0116 04:27:59.759782 2484801 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0116 04:27:59.761693 2484801 cli_runner.go:164] Run: docker network inspect multinode-701570 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0116 04:27:59.778750 2484801 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0116 04:27:59.783232 2484801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
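
That bash one-liner pins host.minikube.internal to the network gateway (192.168.58.1) in the node's /etc/hosts, first filtering any stale entry so repeated starts stay idempotent. The same filter-append-install sequence in Go (writing a scratch file and moving it into place, like the /tmp/h.$$ plus sudo cp dance; run as root):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const hostsPath = "/etc/hosts"
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		panic(err)
	}

	// Drop any existing host.minikube.internal line (the grep -v step)...
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	// ...then append the gateway mapping from the log (the echo step).
	out := strings.TrimRight(strings.Join(kept, "\n"), "\n") +
		"\n192.168.58.1\thost.minikube.internal\n"

	// Write a scratch copy and install it atomically.
	tmp := hostsPath + ".new"
	if err := os.WriteFile(tmp, []byte(out), 0o644); err != nil {
		panic(err)
	}
	if err := os.Rename(tmp, hostsPath); err != nil {
		panic(err)
	}
	fmt.Println("host.minikube.internal ->", "192.168.58.1")
}
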
	I0116 04:27:59.796523 2484801 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 04:27:59.796595 2484801 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 04:27:59.863543 2484801 command_runner.go:130] > {
	I0116 04:27:59.863566 2484801 command_runner.go:130] >   "images": [
	I0116 04:27:59.863574 2484801 command_runner.go:130] >     {
	I0116 04:27:59.863585 2484801 command_runner.go:130] >       "id": "04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26",
	I0116 04:27:59.863591 2484801 command_runner.go:130] >       "repoTags": [
	I0116 04:27:59.863598 2484801 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0116 04:27:59.863603 2484801 command_runner.go:130] >       ],
	I0116 04:27:59.863608 2484801 command_runner.go:130] >       "repoDigests": [
	I0116 04:27:59.863621 2484801 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0116 04:27:59.863631 2484801 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"
	I0116 04:27:59.863640 2484801 command_runner.go:130] >       ],
	I0116 04:27:59.863646 2484801 command_runner.go:130] >       "size": "60867618",
	I0116 04:27:59.863655 2484801 command_runner.go:130] >       "uid": null,
	I0116 04:27:59.863661 2484801 command_runner.go:130] >       "username": "",
	I0116 04:27:59.863678 2484801 command_runner.go:130] >       "spec": null,
	I0116 04:27:59.863686 2484801 command_runner.go:130] >       "pinned": false
	I0116 04:27:59.863691 2484801 command_runner.go:130] >     },
	I0116 04:27:59.863696 2484801 command_runner.go:130] >     {
	I0116 04:27:59.863705 2484801 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I0116 04:27:59.863710 2484801 command_runner.go:130] >       "repoTags": [
	I0116 04:27:59.863721 2484801 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0116 04:27:59.863730 2484801 command_runner.go:130] >       ],
	I0116 04:27:59.863735 2484801 command_runner.go:130] >       "repoDigests": [
	I0116 04:27:59.863749 2484801 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I0116 04:27:59.863762 2484801 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0116 04:27:59.863770 2484801 command_runner.go:130] >       ],
	I0116 04:27:59.863779 2484801 command_runner.go:130] >       "size": "29037500",
	I0116 04:27:59.863786 2484801 command_runner.go:130] >       "uid": null,
	I0116 04:27:59.863791 2484801 command_runner.go:130] >       "username": "",
	I0116 04:27:59.863796 2484801 command_runner.go:130] >       "spec": null,
	I0116 04:27:59.863803 2484801 command_runner.go:130] >       "pinned": false
	I0116 04:27:59.863812 2484801 command_runner.go:130] >     },
	I0116 04:27:59.863816 2484801 command_runner.go:130] >     {
	I0116 04:27:59.863824 2484801 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I0116 04:27:59.863832 2484801 command_runner.go:130] >       "repoTags": [
	I0116 04:27:59.863839 2484801 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0116 04:27:59.863847 2484801 command_runner.go:130] >       ],
	I0116 04:27:59.863852 2484801 command_runner.go:130] >       "repoDigests": [
	I0116 04:27:59.863867 2484801 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I0116 04:27:59.863876 2484801 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I0116 04:27:59.863884 2484801 command_runner.go:130] >       ],
	I0116 04:27:59.863890 2484801 command_runner.go:130] >       "size": "51393451",
	I0116 04:27:59.863898 2484801 command_runner.go:130] >       "uid": null,
	I0116 04:27:59.863903 2484801 command_runner.go:130] >       "username": "",
	I0116 04:27:59.863911 2484801 command_runner.go:130] >       "spec": null,
	I0116 04:27:59.863917 2484801 command_runner.go:130] >       "pinned": false
	I0116 04:27:59.863924 2484801 command_runner.go:130] >     },
	I0116 04:27:59.863929 2484801 command_runner.go:130] >     {
	I0116 04:27:59.863940 2484801 command_runner.go:130] >       "id": "9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace",
	I0116 04:27:59.863945 2484801 command_runner.go:130] >       "repoTags": [
	I0116 04:27:59.863954 2484801 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0116 04:27:59.863959 2484801 command_runner.go:130] >       ],
	I0116 04:27:59.863964 2484801 command_runner.go:130] >       "repoDigests": [
	I0116 04:27:59.863976 2484801 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3",
	I0116 04:27:59.863988 2484801 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"
	I0116 04:27:59.863997 2484801 command_runner.go:130] >       ],
	I0116 04:27:59.864008 2484801 command_runner.go:130] >       "size": "182203183",
	I0116 04:27:59.864017 2484801 command_runner.go:130] >       "uid": {
	I0116 04:27:59.864022 2484801 command_runner.go:130] >         "value": "0"
	I0116 04:27:59.864028 2484801 command_runner.go:130] >       },
	I0116 04:27:59.864035 2484801 command_runner.go:130] >       "username": "",
	I0116 04:27:59.864041 2484801 command_runner.go:130] >       "spec": null,
	I0116 04:27:59.864046 2484801 command_runner.go:130] >       "pinned": false
	I0116 04:27:59.864054 2484801 command_runner.go:130] >     },
	I0116 04:27:59.864059 2484801 command_runner.go:130] >     {
	I0116 04:27:59.864067 2484801 command_runner.go:130] >       "id": "04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419",
	I0116 04:27:59.864075 2484801 command_runner.go:130] >       "repoTags": [
	I0116 04:27:59.864082 2484801 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0116 04:27:59.864090 2484801 command_runner.go:130] >       ],
	I0116 04:27:59.864095 2484801 command_runner.go:130] >       "repoDigests": [
	I0116 04:27:59.864108 2484801 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb",
	I0116 04:27:59.864118 2484801 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2"
	I0116 04:27:59.864126 2484801 command_runner.go:130] >       ],
	I0116 04:27:59.864131 2484801 command_runner.go:130] >       "size": "121119694",
	I0116 04:27:59.864138 2484801 command_runner.go:130] >       "uid": {
	I0116 04:27:59.864147 2484801 command_runner.go:130] >         "value": "0"
	I0116 04:27:59.864152 2484801 command_runner.go:130] >       },
	I0116 04:27:59.864161 2484801 command_runner.go:130] >       "username": "",
	I0116 04:27:59.864166 2484801 command_runner.go:130] >       "spec": null,
	I0116 04:27:59.864174 2484801 command_runner.go:130] >       "pinned": false
	I0116 04:27:59.864179 2484801 command_runner.go:130] >     },
	I0116 04:27:59.864187 2484801 command_runner.go:130] >     {
	I0116 04:27:59.864195 2484801 command_runner.go:130] >       "id": "9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b",
	I0116 04:27:59.864203 2484801 command_runner.go:130] >       "repoTags": [
	I0116 04:27:59.864210 2484801 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0116 04:27:59.864216 2484801 command_runner.go:130] >       ],
	I0116 04:27:59.864222 2484801 command_runner.go:130] >       "repoDigests": [
	I0116 04:27:59.864235 2484801 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0116 04:27:59.864249 2484801 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e"
	I0116 04:27:59.864256 2484801 command_runner.go:130] >       ],
	I0116 04:27:59.864262 2484801 command_runner.go:130] >       "size": "117252916",
	I0116 04:27:59.864270 2484801 command_runner.go:130] >       "uid": {
	I0116 04:27:59.864278 2484801 command_runner.go:130] >         "value": "0"
	I0116 04:27:59.864285 2484801 command_runner.go:130] >       },
	I0116 04:27:59.864290 2484801 command_runner.go:130] >       "username": "",
	I0116 04:27:59.864298 2484801 command_runner.go:130] >       "spec": null,
	I0116 04:27:59.864307 2484801 command_runner.go:130] >       "pinned": false
	I0116 04:27:59.864311 2484801 command_runner.go:130] >     },
	I0116 04:27:59.864315 2484801 command_runner.go:130] >     {
	I0116 04:27:59.864327 2484801 command_runner.go:130] >       "id": "3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39",
	I0116 04:27:59.864335 2484801 command_runner.go:130] >       "repoTags": [
	I0116 04:27:59.864341 2484801 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0116 04:27:59.864349 2484801 command_runner.go:130] >       ],
	I0116 04:27:59.864354 2484801 command_runner.go:130] >       "repoDigests": [
	I0116 04:27:59.864363 2484801 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68",
	I0116 04:27:59.864376 2484801 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0116 04:27:59.864382 2484801 command_runner.go:130] >       ],
	I0116 04:27:59.864389 2484801 command_runner.go:130] >       "size": "69992343",
	I0116 04:27:59.864398 2484801 command_runner.go:130] >       "uid": null,
	I0116 04:27:59.864403 2484801 command_runner.go:130] >       "username": "",
	I0116 04:27:59.864413 2484801 command_runner.go:130] >       "spec": null,
	I0116 04:27:59.864423 2484801 command_runner.go:130] >       "pinned": false
	I0116 04:27:59.864427 2484801 command_runner.go:130] >     },
	I0116 04:27:59.864435 2484801 command_runner.go:130] >     {
	I0116 04:27:59.864443 2484801 command_runner.go:130] >       "id": "05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54",
	I0116 04:27:59.864450 2484801 command_runner.go:130] >       "repoTags": [
	I0116 04:27:59.864457 2484801 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0116 04:27:59.864464 2484801 command_runner.go:130] >       ],
	I0116 04:27:59.864470 2484801 command_runner.go:130] >       "repoDigests": [
	I0116 04:27:59.864494 2484801 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0116 04:27:59.864507 2484801 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe"
	I0116 04:27:59.864515 2484801 command_runner.go:130] >       ],
	I0116 04:27:59.864521 2484801 command_runner.go:130] >       "size": "59253556",
	I0116 04:27:59.864526 2484801 command_runner.go:130] >       "uid": {
	I0116 04:27:59.864533 2484801 command_runner.go:130] >         "value": "0"
	I0116 04:27:59.864538 2484801 command_runner.go:130] >       },
	I0116 04:27:59.864543 2484801 command_runner.go:130] >       "username": "",
	I0116 04:27:59.864550 2484801 command_runner.go:130] >       "spec": null,
	I0116 04:27:59.864564 2484801 command_runner.go:130] >       "pinned": false
	I0116 04:27:59.864573 2484801 command_runner.go:130] >     },
	I0116 04:27:59.864578 2484801 command_runner.go:130] >     {
	I0116 04:27:59.864588 2484801 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I0116 04:27:59.864597 2484801 command_runner.go:130] >       "repoTags": [
	I0116 04:27:59.864602 2484801 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0116 04:27:59.864611 2484801 command_runner.go:130] >       ],
	I0116 04:27:59.864616 2484801 command_runner.go:130] >       "repoDigests": [
	I0116 04:27:59.864625 2484801 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I0116 04:27:59.864638 2484801 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I0116 04:27:59.864645 2484801 command_runner.go:130] >       ],
	I0116 04:27:59.864651 2484801 command_runner.go:130] >       "size": "520014",
	I0116 04:27:59.864659 2484801 command_runner.go:130] >       "uid": {
	I0116 04:27:59.864664 2484801 command_runner.go:130] >         "value": "65535"
	I0116 04:27:59.864672 2484801 command_runner.go:130] >       },
	I0116 04:27:59.864677 2484801 command_runner.go:130] >       "username": "",
	I0116 04:27:59.864685 2484801 command_runner.go:130] >       "spec": null,
	I0116 04:27:59.864691 2484801 command_runner.go:130] >       "pinned": false
	I0116 04:27:59.864699 2484801 command_runner.go:130] >     }
	I0116 04:27:59.864706 2484801 command_runner.go:130] >   ]
	I0116 04:27:59.864711 2484801 command_runner.go:130] > }
	I0116 04:27:59.867163 2484801 crio.go:496] all images are preloaded for cri-o runtime.
	I0116 04:27:59.867186 2484801 crio.go:415] Images already preloaded, skipping extraction
	I0116 04:27:59.867263 2484801 ssh_runner.go:195] Run: sudo crictl images --output json
	I0116 04:27:59.907208 2484801 command_runner.go:130] > {
	I0116 04:27:59.907245 2484801 command_runner.go:130] >   "images": [
	I0116 04:27:59.907252 2484801 command_runner.go:130] >     {
	I0116 04:27:59.907262 2484801 command_runner.go:130] >       "id": "04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26",
	I0116 04:27:59.907270 2484801 command_runner.go:130] >       "repoTags": [
	I0116 04:27:59.907284 2484801 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0116 04:27:59.907290 2484801 command_runner.go:130] >       ],
	I0116 04:27:59.907295 2484801 command_runner.go:130] >       "repoDigests": [
	I0116 04:27:59.907307 2484801 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0116 04:27:59.907321 2484801 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"
	I0116 04:27:59.907326 2484801 command_runner.go:130] >       ],
	I0116 04:27:59.907334 2484801 command_runner.go:130] >       "size": "60867618",
	I0116 04:27:59.907343 2484801 command_runner.go:130] >       "uid": null,
	I0116 04:27:59.907348 2484801 command_runner.go:130] >       "username": "",
	I0116 04:27:59.907354 2484801 command_runner.go:130] >       "spec": null,
	I0116 04:27:59.907366 2484801 command_runner.go:130] >       "pinned": false
	I0116 04:27:59.907374 2484801 command_runner.go:130] >     },
	I0116 04:27:59.907378 2484801 command_runner.go:130] >     {
	I0116 04:27:59.907386 2484801 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I0116 04:27:59.907393 2484801 command_runner.go:130] >       "repoTags": [
	I0116 04:27:59.907400 2484801 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0116 04:27:59.907407 2484801 command_runner.go:130] >       ],
	I0116 04:27:59.907412 2484801 command_runner.go:130] >       "repoDigests": [
	I0116 04:27:59.907422 2484801 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I0116 04:27:59.907437 2484801 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0116 04:27:59.907442 2484801 command_runner.go:130] >       ],
	I0116 04:27:59.907450 2484801 command_runner.go:130] >       "size": "29037500",
	I0116 04:27:59.907455 2484801 command_runner.go:130] >       "uid": null,
	I0116 04:27:59.907460 2484801 command_runner.go:130] >       "username": "",
	I0116 04:27:59.907464 2484801 command_runner.go:130] >       "spec": null,
	I0116 04:27:59.907469 2484801 command_runner.go:130] >       "pinned": false
	I0116 04:27:59.907473 2484801 command_runner.go:130] >     },
	I0116 04:27:59.907477 2484801 command_runner.go:130] >     {
	I0116 04:27:59.907487 2484801 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I0116 04:27:59.907499 2484801 command_runner.go:130] >       "repoTags": [
	I0116 04:27:59.907505 2484801 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0116 04:27:59.907510 2484801 command_runner.go:130] >       ],
	I0116 04:27:59.907517 2484801 command_runner.go:130] >       "repoDigests": [
	I0116 04:27:59.907527 2484801 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I0116 04:27:59.907539 2484801 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I0116 04:27:59.907544 2484801 command_runner.go:130] >       ],
	I0116 04:27:59.907552 2484801 command_runner.go:130] >       "size": "51393451",
	I0116 04:27:59.907558 2484801 command_runner.go:130] >       "uid": null,
	I0116 04:27:59.907567 2484801 command_runner.go:130] >       "username": "",
	I0116 04:27:59.907572 2484801 command_runner.go:130] >       "spec": null,
	I0116 04:27:59.907577 2484801 command_runner.go:130] >       "pinned": false
	I0116 04:27:59.907581 2484801 command_runner.go:130] >     },
	I0116 04:27:59.907589 2484801 command_runner.go:130] >     {
	I0116 04:27:59.907597 2484801 command_runner.go:130] >       "id": "9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace",
	I0116 04:27:59.907605 2484801 command_runner.go:130] >       "repoTags": [
	I0116 04:27:59.907612 2484801 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0116 04:27:59.907622 2484801 command_runner.go:130] >       ],
	I0116 04:27:59.907631 2484801 command_runner.go:130] >       "repoDigests": [
	I0116 04:27:59.907640 2484801 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3",
	I0116 04:27:59.907652 2484801 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"
	I0116 04:27:59.907662 2484801 command_runner.go:130] >       ],
	I0116 04:27:59.907671 2484801 command_runner.go:130] >       "size": "182203183",
	I0116 04:27:59.907680 2484801 command_runner.go:130] >       "uid": {
	I0116 04:27:59.907686 2484801 command_runner.go:130] >         "value": "0"
	I0116 04:27:59.907694 2484801 command_runner.go:130] >       },
	I0116 04:27:59.907699 2484801 command_runner.go:130] >       "username": "",
	I0116 04:27:59.907707 2484801 command_runner.go:130] >       "spec": null,
	I0116 04:27:59.907712 2484801 command_runner.go:130] >       "pinned": false
	I0116 04:27:59.907719 2484801 command_runner.go:130] >     },
	I0116 04:27:59.907724 2484801 command_runner.go:130] >     {
	I0116 04:27:59.907732 2484801 command_runner.go:130] >       "id": "04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419",
	I0116 04:27:59.907737 2484801 command_runner.go:130] >       "repoTags": [
	I0116 04:27:59.907743 2484801 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0116 04:27:59.907751 2484801 command_runner.go:130] >       ],
	I0116 04:27:59.907758 2484801 command_runner.go:130] >       "repoDigests": [
	I0116 04:27:59.907771 2484801 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb",
	I0116 04:27:59.907783 2484801 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2"
	I0116 04:27:59.907791 2484801 command_runner.go:130] >       ],
	I0116 04:27:59.907796 2484801 command_runner.go:130] >       "size": "121119694",
	I0116 04:27:59.907804 2484801 command_runner.go:130] >       "uid": {
	I0116 04:27:59.907809 2484801 command_runner.go:130] >         "value": "0"
	I0116 04:27:59.907813 2484801 command_runner.go:130] >       },
	I0116 04:27:59.907818 2484801 command_runner.go:130] >       "username": "",
	I0116 04:27:59.907823 2484801 command_runner.go:130] >       "spec": null,
	I0116 04:27:59.907830 2484801 command_runner.go:130] >       "pinned": false
	I0116 04:27:59.907838 2484801 command_runner.go:130] >     },
	I0116 04:27:59.907842 2484801 command_runner.go:130] >     {
	I0116 04:27:59.907850 2484801 command_runner.go:130] >       "id": "9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b",
	I0116 04:27:59.907858 2484801 command_runner.go:130] >       "repoTags": [
	I0116 04:27:59.907865 2484801 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0116 04:27:59.907873 2484801 command_runner.go:130] >       ],
	I0116 04:27:59.907878 2484801 command_runner.go:130] >       "repoDigests": [
	I0116 04:27:59.907893 2484801 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0116 04:27:59.907904 2484801 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e"
	I0116 04:27:59.907912 2484801 command_runner.go:130] >       ],
	I0116 04:27:59.907917 2484801 command_runner.go:130] >       "size": "117252916",
	I0116 04:27:59.907925 2484801 command_runner.go:130] >       "uid": {
	I0116 04:27:59.907930 2484801 command_runner.go:130] >         "value": "0"
	I0116 04:27:59.907938 2484801 command_runner.go:130] >       },
	I0116 04:27:59.907943 2484801 command_runner.go:130] >       "username": "",
	I0116 04:27:59.907951 2484801 command_runner.go:130] >       "spec": null,
	I0116 04:27:59.907957 2484801 command_runner.go:130] >       "pinned": false
	I0116 04:27:59.907964 2484801 command_runner.go:130] >     },
	I0116 04:27:59.907968 2484801 command_runner.go:130] >     {
	I0116 04:27:59.907976 2484801 command_runner.go:130] >       "id": "3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39",
	I0116 04:27:59.907981 2484801 command_runner.go:130] >       "repoTags": [
	I0116 04:27:59.907989 2484801 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0116 04:27:59.907997 2484801 command_runner.go:130] >       ],
	I0116 04:27:59.908002 2484801 command_runner.go:130] >       "repoDigests": [
	I0116 04:27:59.908015 2484801 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68",
	I0116 04:27:59.908029 2484801 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0116 04:27:59.908040 2484801 command_runner.go:130] >       ],
	I0116 04:27:59.908047 2484801 command_runner.go:130] >       "size": "69992343",
	I0116 04:27:59.908052 2484801 command_runner.go:130] >       "uid": null,
	I0116 04:27:59.908057 2484801 command_runner.go:130] >       "username": "",
	I0116 04:27:59.908063 2484801 command_runner.go:130] >       "spec": null,
	I0116 04:27:59.908069 2484801 command_runner.go:130] >       "pinned": false
	I0116 04:27:59.908076 2484801 command_runner.go:130] >     },
	I0116 04:27:59.908081 2484801 command_runner.go:130] >     {
	I0116 04:27:59.908088 2484801 command_runner.go:130] >       "id": "05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54",
	I0116 04:27:59.908097 2484801 command_runner.go:130] >       "repoTags": [
	I0116 04:27:59.908103 2484801 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0116 04:27:59.908111 2484801 command_runner.go:130] >       ],
	I0116 04:27:59.908116 2484801 command_runner.go:130] >       "repoDigests": [
	I0116 04:27:59.908138 2484801 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0116 04:27:59.908151 2484801 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe"
	I0116 04:27:59.908159 2484801 command_runner.go:130] >       ],
	I0116 04:27:59.908165 2484801 command_runner.go:130] >       "size": "59253556",
	I0116 04:27:59.908175 2484801 command_runner.go:130] >       "uid": {
	I0116 04:27:59.908186 2484801 command_runner.go:130] >         "value": "0"
	I0116 04:27:59.908194 2484801 command_runner.go:130] >       },
	I0116 04:27:59.908200 2484801 command_runner.go:130] >       "username": "",
	I0116 04:27:59.908205 2484801 command_runner.go:130] >       "spec": null,
	I0116 04:27:59.908212 2484801 command_runner.go:130] >       "pinned": false
	I0116 04:27:59.908216 2484801 command_runner.go:130] >     },
	I0116 04:27:59.908221 2484801 command_runner.go:130] >     {
	I0116 04:27:59.908230 2484801 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I0116 04:27:59.908238 2484801 command_runner.go:130] >       "repoTags": [
	I0116 04:27:59.908244 2484801 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0116 04:27:59.908252 2484801 command_runner.go:130] >       ],
	I0116 04:27:59.908257 2484801 command_runner.go:130] >       "repoDigests": [
	I0116 04:27:59.908269 2484801 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I0116 04:27:59.908281 2484801 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I0116 04:27:59.908289 2484801 command_runner.go:130] >       ],
	I0116 04:27:59.908294 2484801 command_runner.go:130] >       "size": "520014",
	I0116 04:27:59.908299 2484801 command_runner.go:130] >       "uid": {
	I0116 04:27:59.908307 2484801 command_runner.go:130] >         "value": "65535"
	I0116 04:27:59.908315 2484801 command_runner.go:130] >       },
	I0116 04:27:59.908320 2484801 command_runner.go:130] >       "username": "",
	I0116 04:27:59.908329 2484801 command_runner.go:130] >       "spec": null,
	I0116 04:27:59.908334 2484801 command_runner.go:130] >       "pinned": false
	I0116 04:27:59.908341 2484801 command_runner.go:130] >     }
	I0116 04:27:59.908346 2484801 command_runner.go:130] >   ]
	I0116 04:27:59.908353 2484801 command_runner.go:130] > }
	I0116 04:27:59.910798 2484801 crio.go:496] all images are preloaded for cri-o runtime.
	I0116 04:27:59.910818 2484801 cache_images.go:84] Images are preloaded, skipping loading
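
For anyone consuming the `crictl images --output json` payload shown above programmatically, a minimal Go sketch follows; the type and variable names are invented for illustration (they are not minikube's own types), and only the JSON field names come from the dump itself:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// imageList mirrors the JSON shape printed by `sudo crictl images --output json`
	// above; the struct names here are assumptions made for this sketch, while the
	// field names are taken from the dump.
	type imageList struct {
		Images []struct {
			ID          string   `json:"id"`
			RepoTags    []string `json:"repoTags"`
			RepoDigests []string `json:"repoDigests"`
			Size        string   `json:"size"`
			Pinned      bool     `json:"pinned"`
		} `json:"images"`
	}

	func main() {
		// A trimmed sample of the output above ("uid", "username" and "spec" omitted).
		raw := []byte(`{"images":[{"id":"829e9de338bd","repoTags":["registry.k8s.io/pause:3.9"],"repoDigests":[],"size":"520014","pinned":false}]}`)

		var list imageList
		if err := json.Unmarshal(raw, &list); err != nil {
			panic(err)
		}
		for _, img := range list.Images {
			fmt.Println(img.RepoTags, img.Size)
		}
	}
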
	I0116 04:27:59.910890 2484801 ssh_runner.go:195] Run: crio config
	I0116 04:27:59.961480 2484801 command_runner.go:130] ! time="2024-01-16 04:27:59.961090197Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0116 04:27:59.962023 2484801 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0116 04:27:59.968742 2484801 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0116 04:27:59.968773 2484801 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0116 04:27:59.968782 2484801 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0116 04:27:59.968789 2484801 command_runner.go:130] > #
	I0116 04:27:59.968798 2484801 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0116 04:27:59.968806 2484801 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0116 04:27:59.968815 2484801 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0116 04:27:59.968828 2484801 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0116 04:27:59.968833 2484801 command_runner.go:130] > # reload'.
	I0116 04:27:59.968841 2484801 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0116 04:27:59.968857 2484801 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0116 04:27:59.968865 2484801 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0116 04:27:59.968876 2484801 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0116 04:27:59.968881 2484801 command_runner.go:130] > [crio]
	I0116 04:27:59.968890 2484801 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0116 04:27:59.968896 2484801 command_runner.go:130] > # container images, in this directory.
	I0116 04:27:59.968906 2484801 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0116 04:27:59.968919 2484801 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0116 04:27:59.968925 2484801 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0116 04:27:59.968933 2484801 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0116 04:27:59.968944 2484801 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0116 04:27:59.968950 2484801 command_runner.go:130] > # storage_driver = "vfs"
	I0116 04:27:59.968959 2484801 command_runner.go:130] > # List of options to pass to the storage driver. Please refer to
	I0116 04:27:59.968966 2484801 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0116 04:27:59.968971 2484801 command_runner.go:130] > # storage_option = [
	I0116 04:27:59.968976 2484801 command_runner.go:130] > # ]
	I0116 04:27:59.968986 2484801 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0116 04:27:59.968993 2484801 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0116 04:27:59.969002 2484801 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0116 04:27:59.969009 2484801 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0116 04:27:59.969017 2484801 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0116 04:27:59.969026 2484801 command_runner.go:130] > # always happen on a node reboot
	I0116 04:27:59.969035 2484801 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0116 04:27:59.969045 2484801 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0116 04:27:59.969053 2484801 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0116 04:27:59.969064 2484801 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0116 04:27:59.969073 2484801 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0116 04:27:59.969083 2484801 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0116 04:27:59.969093 2484801 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0116 04:27:59.969101 2484801 command_runner.go:130] > # internal_wipe = true
	I0116 04:27:59.969109 2484801 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0116 04:27:59.969119 2484801 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0116 04:27:59.969126 2484801 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0116 04:27:59.969133 2484801 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0116 04:27:59.969143 2484801 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0116 04:27:59.969147 2484801 command_runner.go:130] > [crio.api]
	I0116 04:27:59.969156 2484801 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0116 04:27:59.969162 2484801 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0116 04:27:59.969171 2484801 command_runner.go:130] > # IP address on which the stream server will listen.
	I0116 04:27:59.969176 2484801 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0116 04:27:59.969186 2484801 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0116 04:27:59.969196 2484801 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0116 04:27:59.969201 2484801 command_runner.go:130] > # stream_port = "0"
	I0116 04:27:59.969209 2484801 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0116 04:27:59.969215 2484801 command_runner.go:130] > # stream_enable_tls = false
	I0116 04:27:59.969222 2484801 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0116 04:27:59.969230 2484801 command_runner.go:130] > # stream_idle_timeout = ""
	I0116 04:27:59.969238 2484801 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0116 04:27:59.969246 2484801 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0116 04:27:59.969253 2484801 command_runner.go:130] > # minutes.
	I0116 04:27:59.969258 2484801 command_runner.go:130] > # stream_tls_cert = ""
	I0116 04:27:59.969266 2484801 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0116 04:27:59.969277 2484801 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0116 04:27:59.969282 2484801 command_runner.go:130] > # stream_tls_key = ""
	I0116 04:27:59.969290 2484801 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0116 04:27:59.969299 2484801 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0116 04:27:59.969309 2484801 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0116 04:27:59.969314 2484801 command_runner.go:130] > # stream_tls_ca = ""
	I0116 04:27:59.969328 2484801 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0116 04:27:59.969334 2484801 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0116 04:27:59.969343 2484801 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0116 04:27:59.969352 2484801 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
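
(For reference, the 83886080 bytes configured above is 80 * 1024 * 1024, i.e. an 80 MiB ceiling in place of the 16 MiB (16 * 1024 * 1024) default mentioned in the comments.)
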
	I0116 04:27:59.969371 2484801 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0116 04:27:59.969380 2484801 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0116 04:27:59.969385 2484801 command_runner.go:130] > [crio.runtime]
	I0116 04:27:59.969395 2484801 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0116 04:27:59.969402 2484801 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0116 04:27:59.969407 2484801 command_runner.go:130] > # "nofile=1024:2048"
	I0116 04:27:59.969419 2484801 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0116 04:27:59.969425 2484801 command_runner.go:130] > # default_ulimits = [
	I0116 04:27:59.969432 2484801 command_runner.go:130] > # ]
	I0116 04:27:59.969439 2484801 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0116 04:27:59.969446 2484801 command_runner.go:130] > # no_pivot = false
	I0116 04:27:59.969453 2484801 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0116 04:27:59.969463 2484801 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0116 04:27:59.969472 2484801 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0116 04:27:59.969481 2484801 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0116 04:27:59.969490 2484801 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0116 04:27:59.969498 2484801 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0116 04:27:59.969505 2484801 command_runner.go:130] > # conmon = ""
	I0116 04:27:59.969511 2484801 command_runner.go:130] > # Cgroup setting for conmon
	I0116 04:27:59.969522 2484801 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0116 04:27:59.969530 2484801 command_runner.go:130] > conmon_cgroup = "pod"
	I0116 04:27:59.969538 2484801 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0116 04:27:59.969544 2484801 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0116 04:27:59.969556 2484801 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0116 04:27:59.969564 2484801 command_runner.go:130] > # conmon_env = [
	I0116 04:27:59.969572 2484801 command_runner.go:130] > # ]
	I0116 04:27:59.969578 2484801 command_runner.go:130] > # Additional environment variables to set for all the
	I0116 04:27:59.969584 2484801 command_runner.go:130] > # containers. These are overridden if set in the
	I0116 04:27:59.969594 2484801 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0116 04:27:59.969598 2484801 command_runner.go:130] > # default_env = [
	I0116 04:27:59.969603 2484801 command_runner.go:130] > # ]
	I0116 04:27:59.969613 2484801 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0116 04:27:59.969620 2484801 command_runner.go:130] > # selinux = false
	I0116 04:27:59.969630 2484801 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0116 04:27:59.969640 2484801 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0116 04:27:59.969647 2484801 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0116 04:27:59.969652 2484801 command_runner.go:130] > # seccomp_profile = ""
	I0116 04:27:59.969661 2484801 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0116 04:27:59.969671 2484801 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0116 04:27:59.969679 2484801 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0116 04:27:59.969687 2484801 command_runner.go:130] > # which might increase security.
	I0116 04:27:59.969692 2484801 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0116 04:27:59.969700 2484801 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0116 04:27:59.969708 2484801 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0116 04:27:59.969719 2484801 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0116 04:27:59.969728 2484801 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0116 04:27:59.969737 2484801 command_runner.go:130] > # This option supports live configuration reload.
	I0116 04:27:59.969743 2484801 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0116 04:27:59.969752 2484801 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0116 04:27:59.969758 2484801 command_runner.go:130] > # the cgroup blockio controller.
	I0116 04:27:59.969764 2484801 command_runner.go:130] > # blockio_config_file = ""
	I0116 04:27:59.969776 2484801 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0116 04:27:59.969781 2484801 command_runner.go:130] > # irqbalance daemon.
	I0116 04:27:59.969788 2484801 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0116 04:27:59.969798 2484801 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0116 04:27:59.969807 2484801 command_runner.go:130] > # This option supports live configuration reload.
	I0116 04:27:59.969812 2484801 command_runner.go:130] > # rdt_config_file = ""
	I0116 04:27:59.969818 2484801 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0116 04:27:59.969826 2484801 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0116 04:27:59.969833 2484801 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0116 04:27:59.969840 2484801 command_runner.go:130] > # separate_pull_cgroup = ""
	I0116 04:27:59.969848 2484801 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0116 04:27:59.969858 2484801 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0116 04:27:59.969863 2484801 command_runner.go:130] > # will be added.
	I0116 04:27:59.969868 2484801 command_runner.go:130] > # default_capabilities = [
	I0116 04:27:59.969873 2484801 command_runner.go:130] > # 	"CHOWN",
	I0116 04:27:59.969880 2484801 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0116 04:27:59.969884 2484801 command_runner.go:130] > # 	"FSETID",
	I0116 04:27:59.969894 2484801 command_runner.go:130] > # 	"FOWNER",
	I0116 04:27:59.969898 2484801 command_runner.go:130] > # 	"SETGID",
	I0116 04:27:59.969903 2484801 command_runner.go:130] > # 	"SETUID",
	I0116 04:27:59.969908 2484801 command_runner.go:130] > # 	"SETPCAP",
	I0116 04:27:59.969915 2484801 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0116 04:27:59.969920 2484801 command_runner.go:130] > # 	"KILL",
	I0116 04:27:59.969924 2484801 command_runner.go:130] > # ]
	I0116 04:27:59.969936 2484801 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0116 04:27:59.969943 2484801 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0116 04:27:59.969950 2484801 command_runner.go:130] > # add_inheritable_capabilities = true
	I0116 04:27:59.969960 2484801 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0116 04:27:59.969969 2484801 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0116 04:27:59.969983 2484801 command_runner.go:130] > # default_sysctls = [
	I0116 04:27:59.969987 2484801 command_runner.go:130] > # ]
	I0116 04:27:59.969992 2484801 command_runner.go:130] > # List of devices on the host that a
	I0116 04:27:59.970002 2484801 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0116 04:27:59.970008 2484801 command_runner.go:130] > # allowed_devices = [
	I0116 04:27:59.970012 2484801 command_runner.go:130] > # 	"/dev/fuse",
	I0116 04:27:59.970021 2484801 command_runner.go:130] > # ]
	I0116 04:27:59.970031 2484801 command_runner.go:130] > # List of additional devices, specified as
	I0116 04:27:59.970056 2484801 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0116 04:27:59.970067 2484801 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0116 04:27:59.970075 2484801 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0116 04:27:59.970080 2484801 command_runner.go:130] > # additional_devices = [
	I0116 04:27:59.970087 2484801 command_runner.go:130] > # ]
	I0116 04:27:59.970094 2484801 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0116 04:27:59.970099 2484801 command_runner.go:130] > # cdi_spec_dirs = [
	I0116 04:27:59.970107 2484801 command_runner.go:130] > # 	"/etc/cdi",
	I0116 04:27:59.970112 2484801 command_runner.go:130] > # 	"/var/run/cdi",
	I0116 04:27:59.970118 2484801 command_runner.go:130] > # ]
	I0116 04:27:59.970125 2484801 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0116 04:27:59.970135 2484801 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0116 04:27:59.970140 2484801 command_runner.go:130] > # Defaults to false.
	I0116 04:27:59.970146 2484801 command_runner.go:130] > # device_ownership_from_security_context = false
	I0116 04:27:59.970156 2484801 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0116 04:27:59.970166 2484801 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0116 04:27:59.970173 2484801 command_runner.go:130] > # hooks_dir = [
	I0116 04:27:59.970181 2484801 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0116 04:27:59.970185 2484801 command_runner.go:130] > # ]
	I0116 04:27:59.970193 2484801 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0116 04:27:59.970203 2484801 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0116 04:27:59.970212 2484801 command_runner.go:130] > # its default mounts from the following two files:
	I0116 04:27:59.970220 2484801 command_runner.go:130] > #
	I0116 04:27:59.970227 2484801 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0116 04:27:59.970237 2484801 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0116 04:27:59.970246 2484801 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0116 04:27:59.970249 2484801 command_runner.go:130] > #
	I0116 04:27:59.970257 2484801 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0116 04:27:59.970269 2484801 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0116 04:27:59.970277 2484801 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0116 04:27:59.970285 2484801 command_runner.go:130] > #      only add mounts it finds in this file.
	I0116 04:27:59.970289 2484801 command_runner.go:130] > #
	I0116 04:27:59.970294 2484801 command_runner.go:130] > # default_mounts_file = ""
	I0116 04:27:59.970303 2484801 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0116 04:27:59.970313 2484801 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0116 04:27:59.970322 2484801 command_runner.go:130] > # pids_limit = 0
	I0116 04:27:59.970329 2484801 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0116 04:27:59.970341 2484801 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0116 04:27:59.970349 2484801 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0116 04:27:59.970360 2484801 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0116 04:27:59.970369 2484801 command_runner.go:130] > # log_size_max = -1
	I0116 04:27:59.970377 2484801 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0116 04:27:59.970382 2484801 command_runner.go:130] > # log_to_journald = false
	I0116 04:27:59.970392 2484801 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0116 04:27:59.970398 2484801 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0116 04:27:59.970406 2484801 command_runner.go:130] > # Path to directory for container attach sockets.
	I0116 04:27:59.970414 2484801 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0116 04:27:59.970425 2484801 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0116 04:27:59.970430 2484801 command_runner.go:130] > # bind_mount_prefix = ""
	I0116 04:27:59.970437 2484801 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0116 04:27:59.970444 2484801 command_runner.go:130] > # read_only = false
	I0116 04:27:59.970452 2484801 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0116 04:27:59.970461 2484801 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0116 04:27:59.970469 2484801 command_runner.go:130] > # live configuration reload.
	I0116 04:27:59.970474 2484801 command_runner.go:130] > # log_level = "info"
	I0116 04:27:59.970481 2484801 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0116 04:27:59.970490 2484801 command_runner.go:130] > # This option supports live configuration reload.
	I0116 04:27:59.970495 2484801 command_runner.go:130] > # log_filter = ""
	I0116 04:27:59.970503 2484801 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0116 04:27:59.970510 2484801 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0116 04:27:59.970517 2484801 command_runner.go:130] > # separated by comma.
	I0116 04:27:59.970522 2484801 command_runner.go:130] > # uid_mappings = ""
	I0116 04:27:59.970529 2484801 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0116 04:27:59.970539 2484801 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0116 04:27:59.970544 2484801 command_runner.go:130] > # separated by comma.
	I0116 04:27:59.970551 2484801 command_runner.go:130] > # gid_mappings = ""
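
(As an illustration of the mapping format described above: an entry such as "0:100000:65536" would map container IDs 0-65535 onto host IDs 100000-165535; the numbers are arbitrary examples, not values from this run.)
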
	I0116 04:27:59.970559 2484801 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0116 04:27:59.970568 2484801 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0116 04:27:59.970578 2484801 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0116 04:27:59.970583 2484801 command_runner.go:130] > # minimum_mappable_uid = -1
	I0116 04:27:59.970591 2484801 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0116 04:27:59.970601 2484801 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0116 04:27:59.970608 2484801 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0116 04:27:59.970619 2484801 command_runner.go:130] > # minimum_mappable_gid = -1
	I0116 04:27:59.970626 2484801 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0116 04:27:59.970636 2484801 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0116 04:27:59.970644 2484801 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0116 04:27:59.970654 2484801 command_runner.go:130] > # ctr_stop_timeout = 30
	I0116 04:27:59.970661 2484801 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0116 04:27:59.970669 2484801 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0116 04:27:59.970677 2484801 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0116 04:27:59.970684 2484801 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0116 04:27:59.970691 2484801 command_runner.go:130] > # drop_infra_ctr = true
	I0116 04:27:59.970698 2484801 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0116 04:27:59.970705 2484801 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0116 04:27:59.970716 2484801 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0116 04:27:59.970721 2484801 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0116 04:27:59.970731 2484801 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0116 04:27:59.970746 2484801 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0116 04:27:59.970755 2484801 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0116 04:27:59.970764 2484801 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0116 04:27:59.970779 2484801 command_runner.go:130] > # pinns_path = ""
	I0116 04:27:59.970786 2484801 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0116 04:27:59.970794 2484801 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0116 04:27:59.970805 2484801 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0116 04:27:59.970810 2484801 command_runner.go:130] > # default_runtime = "runc"
	I0116 04:27:59.970817 2484801 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0116 04:27:59.970826 2484801 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0116 04:27:59.970839 2484801 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0116 04:27:59.970845 2484801 command_runner.go:130] > # creation as a file is not desired either.
	I0116 04:27:59.970857 2484801 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0116 04:27:59.970863 2484801 command_runner.go:130] > # the hostname is being managed dynamically.
	I0116 04:27:59.970871 2484801 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0116 04:27:59.970875 2484801 command_runner.go:130] > # ]
	I0116 04:27:59.970883 2484801 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0116 04:27:59.970894 2484801 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0116 04:27:59.970904 2484801 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0116 04:27:59.970911 2484801 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0116 04:27:59.970918 2484801 command_runner.go:130] > #
	I0116 04:27:59.970923 2484801 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0116 04:27:59.970929 2484801 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0116 04:27:59.970936 2484801 command_runner.go:130] > #  runtime_type = "oci"
	I0116 04:27:59.970944 2484801 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0116 04:27:59.970953 2484801 command_runner.go:130] > #  privileged_without_host_devices = false
	I0116 04:27:59.970958 2484801 command_runner.go:130] > #  allowed_annotations = []
	I0116 04:27:59.970964 2484801 command_runner.go:130] > # Where:
	I0116 04:27:59.970971 2484801 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0116 04:27:59.970982 2484801 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0116 04:27:59.970993 2484801 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0116 04:27:59.971000 2484801 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0116 04:27:59.971008 2484801 command_runner.go:130] > #   in $PATH.
	I0116 04:27:59.971016 2484801 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0116 04:27:59.971022 2484801 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0116 04:27:59.971032 2484801 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0116 04:27:59.971042 2484801 command_runner.go:130] > #   state.
	I0116 04:27:59.971054 2484801 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0116 04:27:59.971063 2484801 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0116 04:27:59.971071 2484801 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0116 04:27:59.971080 2484801 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0116 04:27:59.971087 2484801 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0116 04:27:59.971095 2484801 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0116 04:27:59.971104 2484801 command_runner.go:130] > #   The currently recognized values are:
	I0116 04:27:59.971112 2484801 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0116 04:27:59.971124 2484801 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0116 04:27:59.971132 2484801 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0116 04:27:59.971141 2484801 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0116 04:27:59.971151 2484801 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0116 04:27:59.971161 2484801 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0116 04:27:59.971168 2484801 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0116 04:27:59.971177 2484801 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0116 04:27:59.971186 2484801 command_runner.go:130] > #   should be moved to the container's cgroup
	I0116 04:27:59.971191 2484801 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0116 04:27:59.971199 2484801 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0116 04:27:59.971207 2484801 command_runner.go:130] > runtime_type = "oci"
	I0116 04:27:59.971212 2484801 command_runner.go:130] > runtime_root = "/run/runc"
	I0116 04:27:59.971217 2484801 command_runner.go:130] > runtime_config_path = ""
	I0116 04:27:59.971222 2484801 command_runner.go:130] > monitor_path = ""
	I0116 04:27:59.971229 2484801 command_runner.go:130] > monitor_cgroup = ""
	I0116 04:27:59.971234 2484801 command_runner.go:130] > monitor_exec_cgroup = ""
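
As a concrete illustration of the runtimes table format described above, here is a minimal Go sketch (not minikube code) that decodes the runc handler entry using the github.com/BurntSushi/toml library; the Go type names are invented for the example, and only the TOML keys come from the dump:

	package main

	import (
		"fmt"

		"github.com/BurntSushi/toml"
	)

	// runtimeHandler carries the per-runtime keys shown in the dump above; the
	// struct and field names are assumptions made for this sketch.
	type runtimeHandler struct {
		RuntimePath string `toml:"runtime_path"`
		RuntimeType string `toml:"runtime_type"`
		RuntimeRoot string `toml:"runtime_root"`
	}

	type crioConfig struct {
		Crio struct {
			Runtime struct {
				Runtimes map[string]runtimeHandler `toml:"runtimes"`
			} `toml:"runtime"`
		} `toml:"crio"`
	}

	func main() {
		// The runc handler exactly as it appears in the config dump above.
		const doc = `
	[crio.runtime.runtimes.runc]
	runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	runtime_type = "oci"
	runtime_root = "/run/runc"
	`
		var cfg crioConfig
		if _, err := toml.Decode(doc, &cfg); err != nil {
			panic(err)
		}
		fmt.Printf("runc handler: %+v\n", cfg.Crio.Runtime.Runtimes["runc"])
	}
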
	I0116 04:27:59.971273 2484801 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0116 04:27:59.971283 2484801 command_runner.go:130] > # running containers
	I0116 04:27:59.971288 2484801 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0116 04:27:59.971296 2484801 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0116 04:27:59.971304 2484801 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0116 04:27:59.971314 2484801 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0116 04:27:59.971320 2484801 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0116 04:27:59.971326 2484801 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0116 04:27:59.971334 2484801 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0116 04:27:59.971340 2484801 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0116 04:27:59.971349 2484801 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0116 04:27:59.971357 2484801 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0116 04:27:59.971367 2484801 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0116 04:27:59.971374 2484801 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0116 04:27:59.971382 2484801 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0116 04:27:59.971393 2484801 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0116 04:27:59.971402 2484801 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0116 04:27:59.971413 2484801 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0116 04:27:59.971423 2484801 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0116 04:27:59.971439 2484801 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0116 04:27:59.971449 2484801 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0116 04:27:59.971458 2484801 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0116 04:27:59.971462 2484801 command_runner.go:130] > # Example:
	I0116 04:27:59.971470 2484801 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0116 04:27:59.971476 2484801 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0116 04:27:59.971485 2484801 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0116 04:27:59.971491 2484801 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0116 04:27:59.971495 2484801 command_runner.go:130] > # cpuset = "0-1"
	I0116 04:27:59.971503 2484801 command_runner.go:130] > # cpushares = 0
	I0116 04:27:59.971508 2484801 command_runner.go:130] > # Where:
	I0116 04:27:59.971514 2484801 command_runner.go:130] > # The workload name is workload-type.
	I0116 04:27:59.971525 2484801 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0116 04:27:59.971532 2484801 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0116 04:27:59.971539 2484801 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0116 04:27:59.971551 2484801 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0116 04:27:59.971562 2484801 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
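
Assuming the commented-out "workload-type" example above were enabled, a pod opting in might carry annotations like the ones this minimal Go sketch prints; the container name "my-ctr" and the cpushares value are placeholders, not values from this run, and the per-container key follows the example line above:

	package main

	import "fmt"

	func main() {
		// Hypothetical annotations for a pod opting into the sample "workload-type"
		// workload sketched above. Keys follow the commented example in the config
		// dump; the container name "my-ctr" and the cpushares value are invented.
		annotations := map[string]string{
			"io.crio/workload":             "",                     // activation annotation: key only, value ignored
			"io.crio.workload-type/my-ctr": `{"cpushares": "512"}`, // per-container resource override
		}
		for key, value := range annotations {
			fmt.Printf("%s = %q\n", key, value)
		}
	}
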
	I0116 04:27:59.971566 2484801 command_runner.go:130] > # 
	I0116 04:27:59.971573 2484801 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0116 04:27:59.971580 2484801 command_runner.go:130] > #
	I0116 04:27:59.971587 2484801 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0116 04:27:59.971594 2484801 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0116 04:27:59.971605 2484801 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0116 04:27:59.971612 2484801 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0116 04:27:59.971620 2484801 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0116 04:27:59.971626 2484801 command_runner.go:130] > [crio.image]
	I0116 04:27:59.971634 2484801 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0116 04:27:59.971639 2484801 command_runner.go:130] > # default_transport = "docker://"
	I0116 04:27:59.971649 2484801 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0116 04:27:59.971661 2484801 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0116 04:27:59.971666 2484801 command_runner.go:130] > # global_auth_file = ""
	I0116 04:27:59.971675 2484801 command_runner.go:130] > # The image used to instantiate infra containers.
	I0116 04:27:59.971681 2484801 command_runner.go:130] > # This option supports live configuration reload.
	I0116 04:27:59.971689 2484801 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0116 04:27:59.971697 2484801 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0116 04:27:59.971706 2484801 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0116 04:27:59.971712 2484801 command_runner.go:130] > # This option supports live configuration reload.
	I0116 04:27:59.971718 2484801 command_runner.go:130] > # pause_image_auth_file = ""
	I0116 04:27:59.971728 2484801 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0116 04:27:59.971735 2484801 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0116 04:27:59.971746 2484801 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0116 04:27:59.971753 2484801 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0116 04:27:59.971761 2484801 command_runner.go:130] > # pause_command = "/pause"
	I0116 04:27:59.971768 2484801 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0116 04:27:59.971776 2484801 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0116 04:27:59.971785 2484801 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0116 04:27:59.971796 2484801 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0116 04:27:59.971806 2484801 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0116 04:27:59.971811 2484801 command_runner.go:130] > # signature_policy = ""
	I0116 04:27:59.971819 2484801 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0116 04:27:59.971830 2484801 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0116 04:27:59.971835 2484801 command_runner.go:130] > # changing them here.
	I0116 04:27:59.971844 2484801 command_runner.go:130] > # insecure_registries = [
	I0116 04:27:59.971848 2484801 command_runner.go:130] > # ]
	I0116 04:27:59.971856 2484801 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0116 04:27:59.971863 2484801 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I0116 04:27:59.971870 2484801 command_runner.go:130] > # image_volumes = "mkdir"
	I0116 04:27:59.971876 2484801 command_runner.go:130] > # Temporary directory to use for storing big files
	I0116 04:27:59.971882 2484801 command_runner.go:130] > # big_files_temporary_dir = ""
	I0116 04:27:59.971891 2484801 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0116 04:27:59.971898 2484801 command_runner.go:130] > # CNI plugins.
	I0116 04:27:59.971902 2484801 command_runner.go:130] > [crio.network]
	I0116 04:27:59.971910 2484801 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0116 04:27:59.971920 2484801 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0116 04:27:59.971928 2484801 command_runner.go:130] > # cni_default_network = ""
	I0116 04:27:59.971935 2484801 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0116 04:27:59.971942 2484801 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0116 04:27:59.971950 2484801 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0116 04:27:59.971957 2484801 command_runner.go:130] > # plugin_dirs = [
	I0116 04:27:59.971961 2484801 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0116 04:27:59.971965 2484801 command_runner.go:130] > # ]
	I0116 04:27:59.971973 2484801 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0116 04:27:59.971980 2484801 command_runner.go:130] > [crio.metrics]
	I0116 04:27:59.971986 2484801 command_runner.go:130] > # Globally enable or disable metrics support.
	I0116 04:27:59.971992 2484801 command_runner.go:130] > # enable_metrics = false
	I0116 04:27:59.972000 2484801 command_runner.go:130] > # Specify enabled metrics collectors.
	I0116 04:27:59.972005 2484801 command_runner.go:130] > # Per default all metrics are enabled.
	I0116 04:27:59.972013 2484801 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0116 04:27:59.972020 2484801 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0116 04:27:59.972030 2484801 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0116 04:27:59.972035 2484801 command_runner.go:130] > # metrics_collectors = [
	I0116 04:27:59.972040 2484801 command_runner.go:130] > # 	"operations",
	I0116 04:27:59.972049 2484801 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0116 04:27:59.972058 2484801 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0116 04:27:59.972063 2484801 command_runner.go:130] > # 	"operations_errors",
	I0116 04:27:59.972070 2484801 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0116 04:27:59.972078 2484801 command_runner.go:130] > # 	"image_pulls_by_name",
	I0116 04:27:59.972086 2484801 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0116 04:27:59.972091 2484801 command_runner.go:130] > # 	"image_pulls_failures",
	I0116 04:27:59.972096 2484801 command_runner.go:130] > # 	"image_pulls_successes",
	I0116 04:27:59.972102 2484801 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0116 04:27:59.972108 2484801 command_runner.go:130] > # 	"image_layer_reuse",
	I0116 04:27:59.972114 2484801 command_runner.go:130] > # 	"containers_oom_total",
	I0116 04:27:59.972121 2484801 command_runner.go:130] > # 	"containers_oom",
	I0116 04:27:59.972126 2484801 command_runner.go:130] > # 	"processes_defunct",
	I0116 04:27:59.972130 2484801 command_runner.go:130] > # 	"operations_total",
	I0116 04:27:59.972136 2484801 command_runner.go:130] > # 	"operations_latency_seconds",
	I0116 04:27:59.972146 2484801 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0116 04:27:59.972151 2484801 command_runner.go:130] > # 	"operations_errors_total",
	I0116 04:27:59.972159 2484801 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0116 04:27:59.972168 2484801 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0116 04:27:59.972173 2484801 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0116 04:27:59.972178 2484801 command_runner.go:130] > # 	"image_pulls_success_total",
	I0116 04:27:59.972187 2484801 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0116 04:27:59.972192 2484801 command_runner.go:130] > # 	"containers_oom_count_total",
	I0116 04:27:59.972197 2484801 command_runner.go:130] > # ]
	I0116 04:27:59.972203 2484801 command_runner.go:130] > # The port on which the metrics server will listen.
	I0116 04:27:59.972210 2484801 command_runner.go:130] > # metrics_port = 9090
	I0116 04:27:59.972216 2484801 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0116 04:27:59.972221 2484801 command_runner.go:130] > # metrics_socket = ""
	I0116 04:27:59.972230 2484801 command_runner.go:130] > # The certificate for the secure metrics server.
	I0116 04:27:59.972238 2484801 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0116 04:27:59.972247 2484801 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0116 04:27:59.972253 2484801 command_runner.go:130] > # certificate on any modification event.
	I0116 04:27:59.972260 2484801 command_runner.go:130] > # metrics_cert = ""
	I0116 04:27:59.972267 2484801 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0116 04:27:59.972274 2484801 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0116 04:27:59.972281 2484801 command_runner.go:130] > # metrics_key = ""
	I0116 04:27:59.972290 2484801 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0116 04:27:59.972298 2484801 command_runner.go:130] > [crio.tracing]
	I0116 04:27:59.972305 2484801 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0116 04:27:59.972310 2484801 command_runner.go:130] > # enable_tracing = false
	I0116 04:27:59.972319 2484801 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0116 04:27:59.972325 2484801 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0116 04:27:59.972331 2484801 command_runner.go:130] > # Number of samples to collect per million spans.
	I0116 04:27:59.972337 2484801 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0116 04:27:59.972347 2484801 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0116 04:27:59.972351 2484801 command_runner.go:130] > [crio.stats]
	I0116 04:27:59.972358 2484801 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0116 04:27:59.972365 2484801 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0116 04:27:59.972372 2484801 command_runner.go:130] > # stats_collection_period = 0
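The tables dumped above ([crio.image] through [crio.stats]) are plain TOML, so the handful of knobs that matter for scraping can be read back with any TOML decoder. A minimal sketch, assuming the github.com/BurntSushi/toml package (an assumption; CRI-O uses its own config loader), where commented-out keys simply keep their zero values:

package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

// Only the fields this check cares about; unknown keys are ignored.
type crioConfig struct {
	Crio struct {
		Metrics struct {
			EnableMetrics bool   `toml:"enable_metrics"`
			MetricsPort   int    `toml:"metrics_port"`
			MetricsSocket string `toml:"metrics_socket"`
		} `toml:"metrics"`
		Stats struct {
			StatsCollectionPeriod int `toml:"stats_collection_period"`
		} `toml:"stats"`
	} `toml:"crio"`
}

func main() {
	var cfg crioConfig
	// Conventional CRI-O config path; in this log the file lives inside the VM.
	if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
		log.Fatal(err)
	}
	// With the defaults above, enable_metrics is commented out, so this
	// prints the zero values: enabled=false port=0.
	fmt.Printf("metrics enabled=%v port=%d stats period=%ds\n",
		cfg.Crio.Metrics.EnableMetrics, cfg.Crio.Metrics.MetricsPort,
		cfg.Crio.Stats.StatsCollectionPeriod)
}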
	I0116 04:27:59.972857 2484801 cni.go:84] Creating CNI manager for ""
	I0116 04:27:59.972874 2484801 cni.go:136] 1 nodes found, recommending kindnet
	I0116 04:27:59.972905 2484801 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 04:27:59.972926 2484801 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-701570 NodeName:multinode-701570 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 04:27:59.973070 2484801 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-701570"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
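The kubeadm config just rendered is a multi-document YAML stream, and the KubeletConfiguration document is where the "0%" eviction overrides live (disk eviction is disabled so the test VM never evicts pods on a full disk). A short sketch of reading those fields back, assuming gopkg.in/yaml.v3 and a local copy of the file (minikube itself templates and scps the file; it does not round-trip it like this):

package main

import (
	"bytes"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

// Just the fields this check needs from the KubeletConfiguration document.
type kubeletConfig struct {
	Kind         string            `yaml:"kind"`
	CgroupDriver string            `yaml:"cgroupDriver"`
	EvictionHard map[string]string `yaml:"evictionHard"`
	FailSwapOn   bool              `yaml:"failSwapOn"`
}

func main() {
	data, err := os.ReadFile("kubeadm.yaml") // hypothetical local copy
	if err != nil {
		log.Fatal(err)
	}
	// Walk the multi-document stream until the KubeletConfiguration doc.
	dec := yaml.NewDecoder(bytes.NewReader(data))
	for {
		var doc kubeletConfig
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		if doc.Kind != "KubeletConfiguration" {
			continue
		}
		// Expect every evictionHard threshold to be "0%".
		fmt.Printf("cgroupDriver=%s evictionHard=%v failSwapOn=%v\n",
			doc.CgroupDriver, doc.EvictionHard, doc.FailSwapOn)
	}
}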
	
	I0116 04:27:59.973131 2484801 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-701570 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-701570 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 04:27:59.973202 2484801 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 04:27:59.982919 2484801 command_runner.go:130] > kubeadm
	I0116 04:27:59.982941 2484801 command_runner.go:130] > kubectl
	I0116 04:27:59.982948 2484801 command_runner.go:130] > kubelet
	I0116 04:27:59.984241 2484801 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 04:27:59.984336 2484801 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 04:27:59.995103 2484801 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I0116 04:28:00.023171 2484801 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 04:28:00.093210 2484801 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I0116 04:28:00.123362 2484801 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0116 04:28:00.131045 2484801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
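The bash one-liner above is an idempotent /etc/hosts edit: filter out any stale control-plane.minikube.internal line, then append the fresh mapping, so repeated starts never accumulate duplicates. The same logic in Go, purely as an illustration of what the pipeline does (ensureHostsEntry is a hypothetical helper, not minikube code):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing "<ip>\t<host>" line and appends a
// fresh one, mirroring the { grep -v ...; echo ...; } pipeline in the log.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // stale entry, drop it
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.58.2",
		"control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}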
	I0116 04:28:00.155696 2484801 certs.go:56] Setting up /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/multinode-701570 for IP: 192.168.58.2
	I0116 04:28:00.155731 2484801 certs.go:190] acquiring lock for shared ca certs: {Name:mkfc28b038850f5c4d343916ed6224daf2d0b70f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 04:28:00.155918 2484801 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17965-2415678/.minikube/ca.key
	I0116 04:28:00.155963 2484801 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17965-2415678/.minikube/proxy-client-ca.key
	I0116 04:28:00.156019 2484801 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/multinode-701570/client.key
	I0116 04:28:00.156032 2484801 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/multinode-701570/client.crt with IP's: []
	I0116 04:28:00.738097 2484801 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/multinode-701570/client.crt ...
	I0116 04:28:00.738132 2484801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/multinode-701570/client.crt: {Name:mkd87fce5fe5d1720414ae504ee9bf05a980c34e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 04:28:00.738350 2484801 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/multinode-701570/client.key ...
	I0116 04:28:00.738368 2484801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/multinode-701570/client.key: {Name:mkd8415f095480c90c34dd250a01b43a4326ee8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 04:28:00.738468 2484801 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/multinode-701570/apiserver.key.cee25041
	I0116 04:28:00.738482 2484801 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/multinode-701570/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0116 04:28:00.941463 2484801 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/multinode-701570/apiserver.crt.cee25041 ...
	I0116 04:28:00.941496 2484801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/multinode-701570/apiserver.crt.cee25041: {Name:mk382b64cb0e0518fc9426cb82861d00304ae65b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 04:28:00.941685 2484801 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/multinode-701570/apiserver.key.cee25041 ...
	I0116 04:28:00.941706 2484801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/multinode-701570/apiserver.key.cee25041: {Name:mk10ceaaae5cfe0015892eb244a1b5e1bc83e7dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 04:28:00.941790 2484801 certs.go:337] copying /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/multinode-701570/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/multinode-701570/apiserver.crt
	I0116 04:28:00.941869 2484801 certs.go:341] copying /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/multinode-701570/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/multinode-701570/apiserver.key
	I0116 04:28:00.941931 2484801 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/multinode-701570/proxy-client.key
	I0116 04:28:00.941953 2484801 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/multinode-701570/proxy-client.crt with IP's: []
	I0116 04:28:01.091182 2484801 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/multinode-701570/proxy-client.crt ...
	I0116 04:28:01.091212 2484801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/multinode-701570/proxy-client.crt: {Name:mkbb37a82b1147ad58ddff8c0ebebb86046051c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 04:28:01.091388 2484801 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/multinode-701570/proxy-client.key ...
	I0116 04:28:01.091406 2484801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/multinode-701570/proxy-client.key: {Name:mke606226881ea082c0662ef85247f1e26a1a3bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
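The certs.go/crypto.go sequence above issues leaf certificates signed by the shared minikubeCA, with the relevant IPs baked in as SANs (the apiserver cert lists the node IP 192.168.58.2 plus the service VIPs). A self-contained sketch of the same shape of operation with the standard crypto/x509 package; this is not minikube's implementation, and the names and validity periods are illustrative:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"log"
	"math/big"
	"net"
	"time"
)

func must[T any](v T, err error) T {
	if err != nil {
		log.Fatal(err)
	}
	return v
}

// signedCertWithIPs issues a serving cert for the given IP SANs, signed
// by caCert/caKey.
func signedCertWithIPs(caCert *x509.Certificate, caKey *rsa.PrivateKey,
	ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil
}

func main() {
	// Throwaway CA standing in for the cached minikubeCA.
	caKey := must(rsa.GenerateKey(rand.Reader, 2048))
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(2024),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER := must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))
	caCert := must(x509.ParseCertificate(caDER))
	der, _, err := signedCertWithIPs(caCert, caKey, []net.IP{
		net.ParseIP("192.168.58.2"), net.ParseIP("10.96.0.1"),
		net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("issued %d-byte DER certificate\n", len(der))
}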
	I0116 04:28:01.091499 2484801 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/multinode-701570/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0116 04:28:01.091523 2484801 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/multinode-701570/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0116 04:28:01.091536 2484801 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/multinode-701570/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0116 04:28:01.091547 2484801 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/multinode-701570/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0116 04:28:01.091568 2484801 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-2415678/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0116 04:28:01.091584 2484801 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-2415678/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0116 04:28:01.091598 2484801 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-2415678/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0116 04:28:01.091613 2484801 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-2415678/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0116 04:28:01.091663 2484801 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/home/jenkins/minikube-integration/17965-2415678/.minikube/certs/2421005.pem (1338 bytes)
	W0116 04:28:01.091707 2484801 certs.go:433] ignoring /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/home/jenkins/minikube-integration/17965-2415678/.minikube/certs/2421005_empty.pem, impossibly tiny 0 bytes
	I0116 04:28:01.091722 2484801 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/home/jenkins/minikube-integration/17965-2415678/.minikube/certs/ca-key.pem (1675 bytes)
	I0116 04:28:01.091756 2484801 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/home/jenkins/minikube-integration/17965-2415678/.minikube/certs/ca.pem (1078 bytes)
	I0116 04:28:01.091790 2484801 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/home/jenkins/minikube-integration/17965-2415678/.minikube/certs/cert.pem (1123 bytes)
	I0116 04:28:01.091817 2484801 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/home/jenkins/minikube-integration/17965-2415678/.minikube/certs/key.pem (1679 bytes)
	I0116 04:28:01.091894 2484801 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-2415678/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17965-2415678/.minikube/files/etc/ssl/certs/24210052.pem (1708 bytes)
	I0116 04:28:01.091928 2484801 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/2421005.pem -> /usr/share/ca-certificates/2421005.pem
	I0116 04:28:01.091947 2484801 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-2415678/.minikube/files/etc/ssl/certs/24210052.pem -> /usr/share/ca-certificates/24210052.pem
	I0116 04:28:01.091963 2484801 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-2415678/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0116 04:28:01.092648 2484801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/multinode-701570/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 04:28:01.124863 2484801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/multinode-701570/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0116 04:28:01.154353 2484801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/multinode-701570/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 04:28:01.184155 2484801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/multinode-701570/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0116 04:28:01.213192 2484801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-2415678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 04:28:01.240626 2484801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-2415678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 04:28:01.268218 2484801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-2415678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 04:28:01.296597 2484801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-2415678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0116 04:28:01.325152 2484801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/2421005.pem --> /usr/share/ca-certificates/2421005.pem (1338 bytes)
	I0116 04:28:01.353849 2484801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-2415678/.minikube/files/etc/ssl/certs/24210052.pem --> /usr/share/ca-certificates/24210052.pem (1708 bytes)
	I0116 04:28:01.382623 2484801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-2415678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 04:28:01.411498 2484801 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 04:28:01.432710 2484801 ssh_runner.go:195] Run: openssl version
	I0116 04:28:01.441072 2484801 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0116 04:28:01.441150 2484801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2421005.pem && ln -fs /usr/share/ca-certificates/2421005.pem /etc/ssl/certs/2421005.pem"
	I0116 04:28:01.453095 2484801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2421005.pem
	I0116 04:28:01.457499 2484801 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 16 04:13 /usr/share/ca-certificates/2421005.pem
	I0116 04:28:01.457833 2484801 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 04:13 /usr/share/ca-certificates/2421005.pem
	I0116 04:28:01.457895 2484801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2421005.pem
	I0116 04:28:01.466448 2484801 command_runner.go:130] > 51391683
	I0116 04:28:01.466541 2484801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2421005.pem /etc/ssl/certs/51391683.0"
	I0116 04:28:01.478091 2484801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/24210052.pem && ln -fs /usr/share/ca-certificates/24210052.pem /etc/ssl/certs/24210052.pem"
	I0116 04:28:01.491052 2484801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/24210052.pem
	I0116 04:28:01.495783 2484801 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 16 04:13 /usr/share/ca-certificates/24210052.pem
	I0116 04:28:01.495816 2484801 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 04:13 /usr/share/ca-certificates/24210052.pem
	I0116 04:28:01.495881 2484801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/24210052.pem
	I0116 04:28:01.504241 2484801 command_runner.go:130] > 3ec20f2e
	I0116 04:28:01.504685 2484801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/24210052.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 04:28:01.516538 2484801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 04:28:01.528268 2484801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 04:28:01.532949 2484801 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 16 04:06 /usr/share/ca-certificates/minikubeCA.pem
	I0116 04:28:01.532996 2484801 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 04:06 /usr/share/ca-certificates/minikubeCA.pem
	I0116 04:28:01.533061 2484801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 04:28:01.541728 2484801 command_runner.go:130] > b5213941
	I0116 04:28:01.542201 2484801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
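The openssl x509 -hash / ln -fs pairs above follow the OpenSSL trust-store convention: a CA is discovered by a symlink named <subject-hash>.0 under /etc/ssl/certs (e.g. b5213941.0 for minikubeCA). A sketch of the same two steps from Go, shelling out to openssl exactly as the log does (linkCertByHash is a hypothetical helper and would need root to write under /etc/ssl/certs):

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

// linkCertByHash computes the OpenSSL subject hash of a PEM cert and
// links it into the system trust store as /etc/ssl/certs/<hash>.0.
func linkCertByHash(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := "/etc/ssl/certs/" + hash + ".0"
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked, nothing to do
	}
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("linked")
}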
	I0116 04:28:01.553987 2484801 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 04:28:01.558394 2484801 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 04:28:01.558437 2484801 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 04:28:01.558478 2484801 kubeadm.go:404] StartCluster: {Name:multinode-701570 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-701570 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 04:28:01.558553 2484801 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0116 04:28:01.558633 2484801 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0116 04:28:01.600267 2484801 cri.go:89] found id: ""
	I0116 04:28:01.600343 2484801 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 04:28:01.611496 2484801 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0116 04:28:01.611527 2484801 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0116 04:28:01.611536 2484801 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0116 04:28:01.611631 2484801 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 04:28:01.622463 2484801 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0116 04:28:01.622531 2484801 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 04:28:01.633926 2484801 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0116 04:28:01.633955 2484801 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0116 04:28:01.633967 2484801 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0116 04:28:01.633977 2484801 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 04:28:01.634028 2484801 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 04:28:01.634063 2484801 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0116 04:28:01.687228 2484801 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0116 04:28:01.687257 2484801 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I0116 04:28:01.687329 2484801 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 04:28:01.687344 2484801 command_runner.go:130] > [preflight] Running pre-flight checks
	I0116 04:28:01.733609 2484801 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0116 04:28:01.733642 2484801 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0116 04:28:01.733695 2484801 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-aws
	I0116 04:28:01.733704 2484801 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1051-aws
	I0116 04:28:01.733736 2484801 kubeadm.go:322] OS: Linux
	I0116 04:28:01.733745 2484801 command_runner.go:130] > OS: Linux
	I0116 04:28:01.733787 2484801 kubeadm.go:322] CGROUPS_CPU: enabled
	I0116 04:28:01.733796 2484801 command_runner.go:130] > CGROUPS_CPU: enabled
	I0116 04:28:01.733841 2484801 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0116 04:28:01.733849 2484801 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0116 04:28:01.733893 2484801 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0116 04:28:01.733902 2484801 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0116 04:28:01.733946 2484801 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0116 04:28:01.733955 2484801 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0116 04:28:01.734015 2484801 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0116 04:28:01.734027 2484801 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0116 04:28:01.734075 2484801 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0116 04:28:01.734085 2484801 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0116 04:28:01.734127 2484801 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0116 04:28:01.734135 2484801 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0116 04:28:01.734180 2484801 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0116 04:28:01.734188 2484801 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0116 04:28:01.734231 2484801 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0116 04:28:01.734239 2484801 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0116 04:28:01.817729 2484801 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 04:28:01.817758 2484801 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 04:28:01.817848 2484801 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 04:28:01.817859 2484801 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 04:28:01.817947 2484801 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0116 04:28:01.817958 2484801 command_runner.go:130] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0116 04:28:02.064677 2484801 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 04:28:02.069736 2484801 out.go:204]   - Generating certificates and keys ...
	I0116 04:28:02.064782 2484801 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 04:28:02.069818 2484801 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 04:28:02.069829 2484801 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0116 04:28:02.070600 2484801 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 04:28:02.070613 2484801 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0116 04:28:02.306275 2484801 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0116 04:28:02.306352 2484801 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0116 04:28:02.938611 2484801 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0116 04:28:02.938651 2484801 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0116 04:28:03.189080 2484801 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0116 04:28:03.189109 2484801 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0116 04:28:03.369900 2484801 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0116 04:28:03.369931 2484801 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0116 04:28:03.738394 2484801 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0116 04:28:03.738421 2484801 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0116 04:28:03.738603 2484801 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-701570] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0116 04:28:03.738612 2484801 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-701570] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0116 04:28:04.615182 2484801 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0116 04:28:04.615214 2484801 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0116 04:28:04.615331 2484801 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-701570] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0116 04:28:04.615340 2484801 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-701570] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0116 04:28:05.527372 2484801 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0116 04:28:05.527398 2484801 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0116 04:28:05.920441 2484801 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0116 04:28:05.920472 2484801 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0116 04:28:06.356440 2484801 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0116 04:28:06.356467 2484801 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0116 04:28:06.356712 2484801 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 04:28:06.356724 2484801 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 04:28:06.986067 2484801 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 04:28:06.986098 2484801 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 04:28:07.605213 2484801 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 04:28:07.605239 2484801 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 04:28:08.168444 2484801 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 04:28:08.168449 2484801 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 04:28:08.427165 2484801 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 04:28:08.427197 2484801 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 04:28:08.427957 2484801 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 04:28:08.427974 2484801 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 04:28:08.430578 2484801 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 04:28:08.432788 2484801 out.go:204]   - Booting up control plane ...
	I0116 04:28:08.430675 2484801 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 04:28:08.432881 2484801 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 04:28:08.432891 2484801 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 04:28:08.432962 2484801 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 04:28:08.432967 2484801 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 04:28:08.435226 2484801 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 04:28:08.435247 2484801 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 04:28:08.445804 2484801 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 04:28:08.445829 2484801 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 04:28:08.446607 2484801 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 04:28:08.446621 2484801 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 04:28:08.446848 2484801 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0116 04:28:08.446860 2484801 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0116 04:28:08.539565 2484801 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 04:28:08.539605 2484801 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 04:28:17.042457 2484801 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502945 seconds
	I0116 04:28:17.042484 2484801 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.502945 seconds
	I0116 04:28:17.042584 2484801 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 04:28:17.042589 2484801 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 04:28:17.057342 2484801 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 04:28:17.057371 2484801 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 04:28:17.586712 2484801 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0116 04:28:17.586752 2484801 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0116 04:28:17.586935 2484801 kubeadm.go:322] [mark-control-plane] Marking the node multinode-701570 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0116 04:28:17.586942 2484801 command_runner.go:130] > [mark-control-plane] Marking the node multinode-701570 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0116 04:28:18.099512 2484801 kubeadm.go:322] [bootstrap-token] Using token: jqt4dr.8o9l4lbiz8hydwsq
	I0116 04:28:18.101350 2484801 out.go:204]   - Configuring RBAC rules ...
	I0116 04:28:18.099633 2484801 command_runner.go:130] > [bootstrap-token] Using token: jqt4dr.8o9l4lbiz8hydwsq
	I0116 04:28:18.101488 2484801 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 04:28:18.101511 2484801 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 04:28:18.107311 2484801 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0116 04:28:18.107336 2484801 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0116 04:28:18.118842 2484801 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 04:28:18.118872 2484801 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 04:28:18.123097 2484801 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 04:28:18.123121 2484801 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 04:28:18.126921 2484801 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 04:28:18.126943 2484801 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 04:28:18.130675 2484801 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 04:28:18.130698 2484801 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 04:28:18.147385 2484801 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0116 04:28:18.147408 2484801 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0116 04:28:18.408077 2484801 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0116 04:28:18.408100 2484801 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0116 04:28:18.548426 2484801 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0116 04:28:18.548449 2484801 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0116 04:28:18.548456 2484801 kubeadm.go:322] 
	I0116 04:28:18.548512 2484801 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0116 04:28:18.548517 2484801 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0116 04:28:18.548521 2484801 kubeadm.go:322] 
	I0116 04:28:18.548595 2484801 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0116 04:28:18.548600 2484801 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0116 04:28:18.548605 2484801 kubeadm.go:322] 
	I0116 04:28:18.548628 2484801 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0116 04:28:18.548633 2484801 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0116 04:28:18.548687 2484801 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 04:28:18.548692 2484801 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 04:28:18.548738 2484801 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 04:28:18.548743 2484801 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 04:28:18.548761 2484801 kubeadm.go:322] 
	I0116 04:28:18.548814 2484801 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0116 04:28:18.548819 2484801 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0116 04:28:18.548822 2484801 kubeadm.go:322] 
	I0116 04:28:18.548867 2484801 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0116 04:28:18.548871 2484801 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0116 04:28:18.548875 2484801 kubeadm.go:322] 
	I0116 04:28:18.548924 2484801 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0116 04:28:18.548928 2484801 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0116 04:28:18.549005 2484801 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 04:28:18.549011 2484801 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 04:28:18.549074 2484801 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 04:28:18.549078 2484801 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 04:28:18.549082 2484801 kubeadm.go:322] 
	I0116 04:28:18.549161 2484801 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0116 04:28:18.549165 2484801 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0116 04:28:18.549236 2484801 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0116 04:28:18.549240 2484801 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0116 04:28:18.549244 2484801 kubeadm.go:322] 
	I0116 04:28:18.549322 2484801 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token jqt4dr.8o9l4lbiz8hydwsq \
	I0116 04:28:18.549327 2484801 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token jqt4dr.8o9l4lbiz8hydwsq \
	I0116 04:28:18.549422 2484801 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c8e67ac96916dfae1995365a18c7132d078acd6bda510edb19f010658e1bfbad \
	I0116 04:28:18.549427 2484801 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:c8e67ac96916dfae1995365a18c7132d078acd6bda510edb19f010658e1bfbad \
	I0116 04:28:18.549446 2484801 kubeadm.go:322] 	--control-plane 
	I0116 04:28:18.549450 2484801 command_runner.go:130] > 	--control-plane 
	I0116 04:28:18.549454 2484801 kubeadm.go:322] 
	I0116 04:28:18.549533 2484801 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0116 04:28:18.549541 2484801 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0116 04:28:18.549545 2484801 kubeadm.go:322] 
	I0116 04:28:18.549622 2484801 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token jqt4dr.8o9l4lbiz8hydwsq \
	I0116 04:28:18.549626 2484801 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token jqt4dr.8o9l4lbiz8hydwsq \
	I0116 04:28:18.549721 2484801 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c8e67ac96916dfae1995365a18c7132d078acd6bda510edb19f010658e1bfbad 
	I0116 04:28:18.549725 2484801 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:c8e67ac96916dfae1995365a18c7132d078acd6bda510edb19f010658e1bfbad 
	I0116 04:28:18.553514 2484801 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I0116 04:28:18.553540 2484801 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I0116 04:28:18.553651 2484801 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 04:28:18.553657 2484801 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
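The --discovery-token-ca-cert-hash that kubeadm prints in the join commands above is a SHA-256 digest of the cluster CA's DER-encoded SubjectPublicKeyInfo; joining nodes pin the CA by that digest instead of carrying the whole certificate. A sketch of recomputing it in Go, assuming the certificate path minikube provisions above:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}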
	I0116 04:28:18.553732 2484801 cni.go:84] Creating CNI manager for ""
	I0116 04:28:18.553770 2484801 cni.go:136] 1 nodes found, recommending kindnet
	I0116 04:28:18.557531 2484801 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0116 04:28:18.559310 2484801 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0116 04:28:18.575221 2484801 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0116 04:28:18.575243 2484801 command_runner.go:130] >   Size: 4030506   	Blocks: 7880       IO Block: 4096   regular file
	I0116 04:28:18.575250 2484801 command_runner.go:130] > Device: 3ah/58d	Inode: 1827011     Links: 1
	I0116 04:28:18.575258 2484801 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0116 04:28:18.575264 2484801 command_runner.go:130] > Access: 2023-12-04 16:39:54.000000000 +0000
	I0116 04:28:18.575270 2484801 command_runner.go:130] > Modify: 2023-12-04 16:39:54.000000000 +0000
	I0116 04:28:18.575276 2484801 command_runner.go:130] > Change: 2024-01-16 04:06:01.868348216 +0000
	I0116 04:28:18.575299 2484801 command_runner.go:130] >  Birth: 2024-01-16 04:06:01.824349333 +0000
	I0116 04:28:18.583090 2484801 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0116 04:28:18.583109 2484801 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0116 04:28:18.637244 2484801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0116 04:28:19.473530 2484801 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0116 04:28:19.479744 2484801 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0116 04:28:19.488398 2484801 command_runner.go:130] > serviceaccount/kindnet created
	I0116 04:28:19.503164 2484801 command_runner.go:130] > daemonset.apps/kindnet created
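The four objects just created (clusterrole, clusterrolebinding, serviceaccount, daemonset) come from a single kubectl apply of the generated kindnet manifest against the in-VM kubeconfig. The equivalent invocation from Go via os/exec, with the paths taken from the log (illustrative, not minikube's ssh_runner):

package main

import (
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.28.4/kubectl", "apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml")
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl apply failed: %v\n%s", err, out)
	}
	log.Printf("%s", out)
}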
	I0116 04:28:19.508392 2484801 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 04:28:19.508526 2484801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:28:19.508613 2484801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=6e8fa5f64d0e7272be43ff25ed3826261f0a2578 minikube.k8s.io/name=multinode-701570 minikube.k8s.io/updated_at=2024_01_16T04_28_19_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:28:19.649749 2484801 command_runner.go:130] > node/multinode-701570 labeled
	I0116 04:28:19.653185 2484801 command_runner.go:130] > -16
	I0116 04:28:19.653222 2484801 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0116 04:28:19.653251 2484801 ops.go:34] apiserver oom_adj: -16
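ops.go reads the API server's OOM score adjustment straight out of procfs; -16 tells the kernel's OOM killer to strongly prefer other victims over the apiserver. A sketch of the same check (pgrep -n takes the newest match, a small deviation from the logged plain pgrep, which can return several PIDs):

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pid, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
	if err != nil {
		log.Fatal(err)
	}
	path := fmt.Sprintf("/proc/%s/oom_adj", strings.TrimSpace(string(pid)))
	val, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("apiserver oom_adj: %s", val) // expect -16 here
}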
	I0116 04:28:19.653335 2484801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:28:19.783164 2484801 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 04:28:20.153950 2484801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:28:20.247373 2484801 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 04:28:20.654320 2484801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:28:20.743774 2484801 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 04:28:21.154349 2484801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:28:21.243677 2484801 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 04:28:21.654293 2484801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:28:21.744098 2484801 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 04:28:22.153986 2484801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:28:22.245790 2484801 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 04:28:22.654279 2484801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:28:22.747498 2484801 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 04:28:23.153974 2484801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:28:23.248776 2484801 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 04:28:23.654140 2484801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:28:23.747184 2484801 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 04:28:24.154289 2484801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:28:24.245559 2484801 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 04:28:24.654014 2484801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:28:24.749504 2484801 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 04:28:25.154306 2484801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:28:25.244518 2484801 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 04:28:25.654426 2484801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:28:25.745587 2484801 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 04:28:26.153932 2484801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:28:26.243450 2484801 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 04:28:26.653998 2484801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:28:26.742807 2484801 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 04:28:27.154063 2484801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:28:27.243793 2484801 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 04:28:27.654473 2484801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:28:27.744091 2484801 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 04:28:28.153537 2484801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:28:28.240676 2484801 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 04:28:28.653476 2484801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:28:28.762909 2484801 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 04:28:29.154348 2484801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:28:29.245314 2484801 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 04:28:29.653910 2484801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:28:29.745159 2484801 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 04:28:30.154000 2484801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:28:30.264993 2484801 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 04:28:30.653468 2484801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:28:30.750160 2484801 command_runner.go:130] > NAME      SECRETS   AGE
	I0116 04:28:30.751162 2484801 command_runner.go:130] > default   0         0s
	I0116 04:28:30.754656 2484801 kubeadm.go:1088] duration metric: took 11.246173486s to wait for elevateKubeSystemPrivileges.
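Note: the long run of `serviceaccounts "default" not found` lines above is an expected poll, not a failure. kubeadm returns before the token controller has created the "default" ServiceAccount, so minikube retries the lookup roughly every 500ms until it appears (about 11s in this run). A client-go sketch of the same wait; constructing the clientset is omitted and the package name is illustrative:

    package sketch

    import (
    	"context"
    	"time"

    	apierrors "k8s.io/apimachinery/pkg/api/errors"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitForDefaultSA polls until the "default" ServiceAccount exists in the
    // "default" namespace, mirroring the kubectl retry loop in the log above.
    func waitForDefaultSA(ctx context.Context, cs kubernetes.Interface) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
    			if apierrors.IsNotFound(err) {
    				return false, nil // keep polling; this is the "NotFound" case above
    			}
    			return err == nil, err
    		})
    }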
	I0116 04:28:30.754680 2484801 kubeadm.go:406] StartCluster complete in 29.19620608s
	I0116 04:28:30.754697 2484801 settings.go:142] acquiring lock: {Name:mk66adae4842b25a93c5566bbfd72e0abd3ff5ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 04:28:30.754758 2484801 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17965-2415678/kubeconfig
	I0116 04:28:30.755480 2484801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-2415678/kubeconfig: {Name:mk62b61676cf27f7a957a454bc1b05d015789bca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 04:28:30.755932 2484801 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17965-2415678/kubeconfig
	I0116 04:28:30.756195 2484801 kapi.go:59] client config for multinode-701570: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/multinode-701570/client.crt", KeyFile:"/home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/multinode-701570/client.key", CAFile:"/home/jenkins/minikube-integration/17965-2415678/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9c50), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 04:28:30.756920 2484801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 04:28:30.757256 2484801 config.go:182] Loaded profile config "multinode-701570": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 04:28:30.757358 2484801 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 04:28:30.757420 2484801 addons.go:69] Setting storage-provisioner=true in profile "multinode-701570"
	I0116 04:28:30.757435 2484801 addons.go:234] Setting addon storage-provisioner=true in "multinode-701570"
	I0116 04:28:30.757472 2484801 host.go:66] Checking if "multinode-701570" exists ...
	I0116 04:28:30.757923 2484801 cli_runner.go:164] Run: docker container inspect multinode-701570 --format={{.State.Status}}
	I0116 04:28:30.758484 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0116 04:28:30.758529 2484801 round_trippers.go:469] Request Headers:
	I0116 04:28:30.758552 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:28:30.758575 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:28:30.758809 2484801 cert_rotation.go:137] Starting client certificate rotation controller
	I0116 04:28:30.759258 2484801 addons.go:69] Setting default-storageclass=true in profile "multinode-701570"
	I0116 04:28:30.759303 2484801 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-701570"
	I0116 04:28:30.759635 2484801 cli_runner.go:164] Run: docker container inspect multinode-701570 --format={{.State.Status}}
	I0116 04:28:30.786545 2484801 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I0116 04:28:30.786566 2484801 round_trippers.go:577] Response Headers:
	I0116 04:28:30.786575 2484801 round_trippers.go:580]     Audit-Id: 84b3f89e-161a-46d1-98e0-4dd06482e2e5
	I0116 04:28:30.786581 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:28:30.786587 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:28:30.786593 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:28:30.786599 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:28:30.786612 2484801 round_trippers.go:580]     Content-Length: 291
	I0116 04:28:30.786618 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:28:30 GMT
	I0116 04:28:30.786645 2484801 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0895626e-095c-45ca-93ec-399da9451bea","resourceVersion":"267","creationTimestamp":"2024-01-16T04:28:18Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0116 04:28:30.787091 2484801 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0895626e-095c-45ca-93ec-399da9451bea","resourceVersion":"267","creationTimestamp":"2024-01-16T04:28:18Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0116 04:28:30.787139 2484801 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0116 04:28:30.787145 2484801 round_trippers.go:469] Request Headers:
	I0116 04:28:30.787152 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:28:30.787159 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:28:30.787166 2484801 round_trippers.go:473]     Content-Type: application/json
	I0116 04:28:30.811211 2484801 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 04:28:30.817411 2484801 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 04:28:30.817428 2484801 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 04:28:30.817483 2484801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701570
	I0116 04:28:30.810587 2484801 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I0116 04:28:30.817697 2484801 round_trippers.go:577] Response Headers:
	I0116 04:28:30.817706 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:28:30 GMT
	I0116 04:28:30.817713 2484801 round_trippers.go:580]     Audit-Id: 840f34a7-8450-42d0-8dba-cd9c5dd5fcc5
	I0116 04:28:30.817720 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:28:30.817732 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:28:30.817744 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:28:30.817751 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:28:30.817757 2484801 round_trippers.go:580]     Content-Length: 291
	I0116 04:28:30.817779 2484801 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0895626e-095c-45ca-93ec-399da9451bea","resourceVersion":"342","creationTimestamp":"2024-01-16T04:28:18Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0116 04:28:30.817272 2484801 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17965-2415678/kubeconfig
	I0116 04:28:30.818100 2484801 kapi.go:59] client config for multinode-701570: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/multinode-701570/client.crt", KeyFile:"/home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/multinode-701570/client.key", CAFile:"/home/jenkins/minikube-integration/17965-2415678/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9c50), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 04:28:30.818366 2484801 addons.go:234] Setting addon default-storageclass=true in "multinode-701570"
	I0116 04:28:30.818393 2484801 host.go:66] Checking if "multinode-701570" exists ...
	I0116 04:28:30.818851 2484801 cli_runner.go:164] Run: docker container inspect multinode-701570 --format={{.State.Status}}
	I0116 04:28:30.862318 2484801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35391 SSHKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/machines/multinode-701570/id_rsa Username:docker}
	I0116 04:28:30.864794 2484801 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 04:28:30.864812 2484801 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 04:28:30.864879 2484801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701570
	I0116 04:28:30.898412 2484801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35391 SSHKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/machines/multinode-701570/id_rsa Username:docker}
	I0116 04:28:31.005669 2484801 command_runner.go:130] > apiVersion: v1
	I0116 04:28:31.005689 2484801 command_runner.go:130] > data:
	I0116 04:28:31.005695 2484801 command_runner.go:130] >   Corefile: |
	I0116 04:28:31.005699 2484801 command_runner.go:130] >     .:53 {
	I0116 04:28:31.005704 2484801 command_runner.go:130] >         errors
	I0116 04:28:31.005710 2484801 command_runner.go:130] >         health {
	I0116 04:28:31.005716 2484801 command_runner.go:130] >            lameduck 5s
	I0116 04:28:31.005720 2484801 command_runner.go:130] >         }
	I0116 04:28:31.005725 2484801 command_runner.go:130] >         ready
	I0116 04:28:31.005732 2484801 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0116 04:28:31.005738 2484801 command_runner.go:130] >            pods insecure
	I0116 04:28:31.005745 2484801 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0116 04:28:31.005750 2484801 command_runner.go:130] >            ttl 30
	I0116 04:28:31.005760 2484801 command_runner.go:130] >         }
	I0116 04:28:31.005766 2484801 command_runner.go:130] >         prometheus :9153
	I0116 04:28:31.005772 2484801 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0116 04:28:31.005777 2484801 command_runner.go:130] >            max_concurrent 1000
	I0116 04:28:31.005782 2484801 command_runner.go:130] >         }
	I0116 04:28:31.005786 2484801 command_runner.go:130] >         cache 30
	I0116 04:28:31.005791 2484801 command_runner.go:130] >         loop
	I0116 04:28:31.005795 2484801 command_runner.go:130] >         reload
	I0116 04:28:31.005800 2484801 command_runner.go:130] >         loadbalance
	I0116 04:28:31.005805 2484801 command_runner.go:130] >     }
	I0116 04:28:31.005810 2484801 command_runner.go:130] > kind: ConfigMap
	I0116 04:28:31.005814 2484801 command_runner.go:130] > metadata:
	I0116 04:28:31.005824 2484801 command_runner.go:130] >   creationTimestamp: "2024-01-16T04:28:18Z"
	I0116 04:28:31.005829 2484801 command_runner.go:130] >   name: coredns
	I0116 04:28:31.005834 2484801 command_runner.go:130] >   namespace: kube-system
	I0116 04:28:31.005844 2484801 command_runner.go:130] >   resourceVersion: "263"
	I0116 04:28:31.005850 2484801 command_runner.go:130] >   uid: 52f5a5e0-8838-40db-b9c2-323b833d198b
	I0116 04:28:31.005987 2484801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0116 04:28:31.043211 2484801 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 04:28:31.098631 2484801 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 04:28:31.258814 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0116 04:28:31.258852 2484801 round_trippers.go:469] Request Headers:
	I0116 04:28:31.258862 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:28:31.258870 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:28:31.319323 2484801 round_trippers.go:574] Response Status: 200 OK in 60 milliseconds
	I0116 04:28:31.319349 2484801 round_trippers.go:577] Response Headers:
	I0116 04:28:31.319358 2484801 round_trippers.go:580]     Content-Length: 291
	I0116 04:28:31.319365 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:28:31 GMT
	I0116 04:28:31.319372 2484801 round_trippers.go:580]     Audit-Id: ff83d849-0430-418a-bbef-1fb4efc4349c
	I0116 04:28:31.319378 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:28:31.319384 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:28:31.319390 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:28:31.319400 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:28:31.322689 2484801 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0895626e-095c-45ca-93ec-399da9451bea","resourceVersion":"367","creationTimestamp":"2024-01-16T04:28:18Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0116 04:28:31.322829 2484801 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-701570" context rescaled to 1 replicas
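Note: the GET/PUT pair against .../deployments/coredns/scale above is the standard Scale-subresource round trip: read the current Scale object, set spec.replicas to 1, write it back (the resourceVersion in the bodies moves 267 -> 342 -> 367). A client-go equivalent, assuming a clientset is in scope; retry-on-conflict handling is omitted:

    package sketch

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // rescaleCoreDNS drops the coredns Deployment to a single replica through
    // the scale subresource, matching the GET and PUT requests logged above.
    func rescaleCoreDNS(ctx context.Context, cs kubernetes.Interface) error {
    	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	scale.Spec.Replicas = 1
    	_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
    	return err
    }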
	I0116 04:28:31.322868 2484801 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0116 04:28:31.324881 2484801 out.go:177] * Verifying Kubernetes components...
	I0116 04:28:31.326680 2484801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 04:28:31.711549 2484801 command_runner.go:130] > configmap/coredns replaced
	I0116 04:28:31.717525 2484801 start.go:929] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
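Note: the sed pipeline at 04:28:31.005987 rewrites the Corefile before `kubectl replace`: it inserts `log` ahead of `errors` and a hosts block ahead of the forward stanza, which is what makes host.minikube.internal resolve to the gateway IP 192.168.58.1 from inside the cluster. Reconstructed from the Corefile dump and the sed expressions above, the replaced server block reads:

    .:53 {
        log
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        hosts {
           192.168.58.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }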
	I0116 04:28:31.809784 2484801 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0116 04:28:31.817592 2484801 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0116 04:28:31.827354 2484801 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0116 04:28:31.837227 2484801 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0116 04:28:31.848881 2484801 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0116 04:28:31.873278 2484801 command_runner.go:130] > pod/storage-provisioner created
	I0116 04:28:31.879140 2484801 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0116 04:28:31.879332 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0116 04:28:31.879369 2484801 round_trippers.go:469] Request Headers:
	I0116 04:28:31.879391 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:28:31.879445 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:28:31.879626 2484801 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17965-2415678/kubeconfig
	I0116 04:28:31.879901 2484801 kapi.go:59] client config for multinode-701570: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/multinode-701570/client.crt", KeyFile:"/home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/multinode-701570/client.key", CAFile:"/home/jenkins/minikube-integration/17965-2415678/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9c50), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 04:28:31.880219 2484801 node_ready.go:35] waiting up to 6m0s for node "multinode-701570" to be "Ready" ...
	I0116 04:28:31.880310 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570
	I0116 04:28:31.880322 2484801 round_trippers.go:469] Request Headers:
	I0116 04:28:31.880330 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:28:31.880337 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:28:31.904655 2484801 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I0116 04:28:31.904678 2484801 round_trippers.go:577] Response Headers:
	I0116 04:28:31.904687 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:28:31 GMT
	I0116 04:28:31.904694 2484801 round_trippers.go:580]     Audit-Id: e02f33e3-f926-428e-afa4-51d8bdd2596c
	I0116 04:28:31.904700 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:28:31.904706 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:28:31.904721 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:28:31.904735 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:28:31.912821 2484801 round_trippers.go:574] Response Status: 200 OK in 33 milliseconds
	I0116 04:28:31.912847 2484801 round_trippers.go:577] Response Headers:
	I0116 04:28:31.912857 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:28:31.912863 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:28:31.912870 2484801 round_trippers.go:580]     Content-Length: 1273
	I0116 04:28:31.912877 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:28:31 GMT
	I0116 04:28:31.912883 2484801 round_trippers.go:580]     Audit-Id: 7f7a275d-af1d-4dfd-a605-a6f8fe7d5a0c
	I0116 04:28:31.912892 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:28:31.912898 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:28:31.913871 2484801 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"402"},"items":[{"metadata":{"name":"standard","uid":"5a81007f-508d-4ab3-881e-e0009f42f81a","resourceVersion":"383","creationTimestamp":"2024-01-16T04:28:31Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-16T04:28:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0116 04:28:31.914309 2484801 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"5a81007f-508d-4ab3-881e-e0009f42f81a","resourceVersion":"383","creationTimestamp":"2024-01-16T04:28:31Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-16T04:28:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0116 04:28:31.914365 2484801 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0116 04:28:31.914376 2484801 round_trippers.go:469] Request Headers:
	I0116 04:28:31.914384 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:28:31.914391 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:28:31.914401 2484801 round_trippers.go:473]     Content-Type: application/json
	I0116 04:28:31.914531 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570","uid":"966e9bfd-0814-4772-920d-6bdadae6d98d","resourceVersion":"360","creationTimestamp":"2024-01-16T04:28:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T04_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:15Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 04:28:31.941152 2484801 round_trippers.go:574] Response Status: 200 OK in 26 milliseconds
	I0116 04:28:31.941179 2484801 round_trippers.go:577] Response Headers:
	I0116 04:28:31.941189 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:28:31 GMT
	I0116 04:28:31.941196 2484801 round_trippers.go:580]     Audit-Id: 5fe96c62-025a-407f-90fa-9eba1d06d37a
	I0116 04:28:31.941202 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:28:31.941208 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:28:31.941219 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:28:31.941225 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:28:31.941234 2484801 round_trippers.go:580]     Content-Length: 1220
	I0116 04:28:31.941470 2484801 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"5a81007f-508d-4ab3-881e-e0009f42f81a","resourceVersion":"383","creationTimestamp":"2024-01-16T04:28:31Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-16T04:28:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0116 04:28:31.944084 2484801 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0116 04:28:31.946059 2484801 addons.go:505] enable addons completed in 1.188692028s: enabled=[storage-provisioner default-storageclass]
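Note: the storageclass GET/PUT just above is minikube re-asserting the storageclass.kubernetes.io/is-default-class annotation on "standard" after applying the addon. A hedged client-go sketch of that annotate-and-update step (clientset construction omitted):

    package sketch

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // markDefault sets the default-class annotation on a StorageClass and
    // writes it back, approximating the PUT on /storageclasses/standard above.
    func markDefault(ctx context.Context, cs kubernetes.Interface, name string) error {
    	sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	if sc.Annotations == nil {
    		sc.Annotations = map[string]string{}
    	}
    	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
    	_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
    	return err
    }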
	I0116 04:28:32.381124 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570
	I0116 04:28:32.381148 2484801 round_trippers.go:469] Request Headers:
	I0116 04:28:32.381158 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:28:32.381166 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:28:32.383830 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:28:32.383858 2484801 round_trippers.go:577] Response Headers:
	I0116 04:28:32.383868 2484801 round_trippers.go:580]     Audit-Id: bc700a68-bf61-480d-a023-6b4358fa1580
	I0116 04:28:32.383875 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:28:32.383901 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:28:32.383916 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:28:32.383924 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:28:32.383935 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:28:32 GMT
	I0116 04:28:32.384092 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570","uid":"966e9bfd-0814-4772-920d-6bdadae6d98d","resourceVersion":"360","creationTimestamp":"2024-01-16T04:28:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T04_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:15Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0116 04:28:32.880745 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570
	I0116 04:28:32.880803 2484801 round_trippers.go:469] Request Headers:
	I0116 04:28:32.880826 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:28:32.880834 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:28:32.883315 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:28:32.883379 2484801 round_trippers.go:577] Response Headers:
	I0116 04:28:32.883404 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:28:32.883428 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:28:32 GMT
	I0116 04:28:32.883475 2484801 round_trippers.go:580]     Audit-Id: 5ffae7e1-fe4f-4b79-b81b-ae3b82f9f33c
	I0116 04:28:32.883491 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:28:32.883498 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:28:32.883504 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:28:32.883612 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570","uid":"966e9bfd-0814-4772-920d-6bdadae6d98d","resourceVersion":"414","creationTimestamp":"2024-01-16T04:28:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T04_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:15Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0116 04:28:32.884020 2484801 node_ready.go:49] node "multinode-701570" has status "Ready":"True"
	I0116 04:28:32.884036 2484801 node_ready.go:38] duration metric: took 1.003799662s waiting for node "multinode-701570" to be "Ready" ...
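Note: node_ready simply re-fetches /api/v1/nodes/multinode-701570 until the NodeReady condition turns True; the flip is visible above as the node's resourceVersion moving from 360 to 414 between polls. The readiness predicate reduces to a scan of the node's status conditions:

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // nodeIsReady reports whether the NodeReady condition is True -- the same
    // check behind the `has status "Ready":"True"` line above.
    func nodeIsReady(node *corev1.Node) bool {
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }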
	I0116 04:28:32.884047 2484801 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 04:28:32.884127 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0116 04:28:32.884138 2484801 round_trippers.go:469] Request Headers:
	I0116 04:28:32.884146 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:28:32.884154 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:28:32.888296 2484801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 04:28:32.888329 2484801 round_trippers.go:577] Response Headers:
	I0116 04:28:32.888338 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:28:32.888344 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:28:32.888351 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:28:32.888360 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:28:32.888372 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:28:32 GMT
	I0116 04:28:32.888379 2484801 round_trippers.go:580]     Audit-Id: a8ba6dfa-3d71-44eb-b9fd-0352b362b442
	I0116 04:28:32.890085 2484801 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"417"},"items":[{"metadata":{"name":"coredns-5dd5756b68-hm6kd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"0707ad3b-2557-49c2-bdc3-77554baac045","resourceVersion":"417","creationTimestamp":"2024-01-16T04:28:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"13e7ee15-d416-49ef-a50d-0f96dca51f4c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13e7ee15-d416-49ef-a50d-0f96dca51f4c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54150 chars]
	I0116 04:28:32.894533 2484801 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-hm6kd" in "kube-system" namespace to be "Ready" ...
	I0116 04:28:32.894651 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-hm6kd
	I0116 04:28:32.894663 2484801 round_trippers.go:469] Request Headers:
	I0116 04:28:32.894677 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:28:32.894687 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:28:32.898211 2484801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 04:28:32.898233 2484801 round_trippers.go:577] Response Headers:
	I0116 04:28:32.898242 2484801 round_trippers.go:580]     Audit-Id: 8760c15a-b019-4d68-bb05-c6168a99f43d
	I0116 04:28:32.898249 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:28:32.898255 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:28:32.898261 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:28:32.898269 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:28:32.898278 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:28:32 GMT
	I0116 04:28:32.899202 2484801 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-hm6kd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"0707ad3b-2557-49c2-bdc3-77554baac045","resourceVersion":"417","creationTimestamp":"2024-01-16T04:28:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"13e7ee15-d416-49ef-a50d-0f96dca51f4c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13e7ee15-d416-49ef-a50d-0f96dca51f4c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0116 04:28:32.899737 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570
	I0116 04:28:32.899750 2484801 round_trippers.go:469] Request Headers:
	I0116 04:28:32.899759 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:28:32.899770 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:28:32.905559 2484801 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0116 04:28:32.905583 2484801 round_trippers.go:577] Response Headers:
	I0116 04:28:32.905593 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:28:32.905600 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:28:32.905607 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:28:32.905613 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:28:32.905620 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:28:32 GMT
	I0116 04:28:32.905629 2484801 round_trippers.go:580]     Audit-Id: a665dc74-6159-4679-96ba-645e20daae15
	I0116 04:28:32.905830 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570","uid":"966e9bfd-0814-4772-920d-6bdadae6d98d","resourceVersion":"414","creationTimestamp":"2024-01-16T04:28:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T04_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:15Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0116 04:28:33.394907 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-hm6kd
	I0116 04:28:33.394932 2484801 round_trippers.go:469] Request Headers:
	I0116 04:28:33.394945 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:28:33.394952 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:28:33.400827 2484801 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0116 04:28:33.400853 2484801 round_trippers.go:577] Response Headers:
	I0116 04:28:33.400862 2484801 round_trippers.go:580]     Audit-Id: 0d5820fb-430b-469f-b3e8-76f5db798bd4
	I0116 04:28:33.400868 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:28:33.400875 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:28:33.400881 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:28:33.400887 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:28:33.400898 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:28:33 GMT
	I0116 04:28:33.401375 2484801 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-hm6kd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"0707ad3b-2557-49c2-bdc3-77554baac045","resourceVersion":"417","creationTimestamp":"2024-01-16T04:28:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"13e7ee15-d416-49ef-a50d-0f96dca51f4c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13e7ee15-d416-49ef-a50d-0f96dca51f4c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0116 04:28:33.401915 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570
	I0116 04:28:33.401933 2484801 round_trippers.go:469] Request Headers:
	I0116 04:28:33.401943 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:28:33.401950 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:28:33.404103 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:28:33.404125 2484801 round_trippers.go:577] Response Headers:
	I0116 04:28:33.404134 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:28:33.404141 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:28:33.404147 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:28:33.404158 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:28:33 GMT
	I0116 04:28:33.404167 2484801 round_trippers.go:580]     Audit-Id: 8349fe3b-471a-4616-8dc0-5980b75d260d
	I0116 04:28:33.404174 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:28:33.404311 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570","uid":"966e9bfd-0814-4772-920d-6bdadae6d98d","resourceVersion":"414","creationTimestamp":"2024-01-16T04:28:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T04_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:15Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0116 04:28:33.895390 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-hm6kd
	I0116 04:28:33.895416 2484801 round_trippers.go:469] Request Headers:
	I0116 04:28:33.895426 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:28:33.895433 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:28:33.898394 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:28:33.898421 2484801 round_trippers.go:577] Response Headers:
	I0116 04:28:33.898430 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:28:33.898436 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:28:33.898442 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:28:33 GMT
	I0116 04:28:33.898448 2484801 round_trippers.go:580]     Audit-Id: ffd6a98b-0928-4c42-9097-a55ae905d9e6
	I0116 04:28:33.898455 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:28:33.898461 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:28:33.898895 2484801 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-hm6kd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"0707ad3b-2557-49c2-bdc3-77554baac045","resourceVersion":"430","creationTimestamp":"2024-01-16T04:28:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"13e7ee15-d416-49ef-a50d-0f96dca51f4c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13e7ee15-d416-49ef-a50d-0f96dca51f4c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0116 04:28:33.899430 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570
	I0116 04:28:33.899448 2484801 round_trippers.go:469] Request Headers:
	I0116 04:28:33.899457 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:28:33.899465 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:28:33.903620 2484801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 04:28:33.903644 2484801 round_trippers.go:577] Response Headers:
	I0116 04:28:33.903653 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:28:33.903660 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:28:33 GMT
	I0116 04:28:33.903676 2484801 round_trippers.go:580]     Audit-Id: 15317d65-1ba7-4c68-808b-1728cd7b103d
	I0116 04:28:33.903685 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:28:33.903694 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:28:33.903705 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:28:33.904105 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570","uid":"966e9bfd-0814-4772-920d-6bdadae6d98d","resourceVersion":"414","creationTimestamp":"2024-01-16T04:28:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T04_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:15Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0116 04:28:33.904569 2484801 pod_ready.go:92] pod "coredns-5dd5756b68-hm6kd" in "kube-system" namespace has status "Ready":"True"
	I0116 04:28:33.904591 2484801 pod_ready.go:81] duration metric: took 1.010027048s waiting for pod "coredns-5dd5756b68-hm6kd" in "kube-system" namespace to be "Ready" ...
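Note: pod_ready applies the analogous predicate per pod: for each system-critical label listed at 04:28:32.884047 (k8s-app=kube-dns, component=etcd, and so on) it lists kube-system pods and requires the PodReady condition on every match. A sketch under the same assumptions as the earlier snippets:

    package sketch

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // systemPodsReady lists kube-system pods matching one label selector and
    // reports whether every one of them has PodReady == True.
    func systemPodsReady(ctx context.Context, cs kubernetes.Interface, selector string) (bool, error) {
    	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
    	if err != nil {
    		return false, err
    	}
    	for _, p := range pods.Items {
    		ready := false
    		for _, c := range p.Status.Conditions {
    			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    				ready = true
    			}
    		}
    		if !ready {
    			return false, nil
    		}
    	}
    	return true, nil
    }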
	I0116 04:28:33.904601 2484801 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-701570" in "kube-system" namespace to be "Ready" ...
	I0116 04:28:33.904662 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-701570
	I0116 04:28:33.904672 2484801 round_trippers.go:469] Request Headers:
	I0116 04:28:33.904682 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:28:33.904689 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:28:33.907212 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:28:33.907277 2484801 round_trippers.go:577] Response Headers:
	I0116 04:28:33.907304 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:28:33.907341 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:28:33.907367 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:28:33.907390 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:28:33 GMT
	I0116 04:28:33.907426 2484801 round_trippers.go:580]     Audit-Id: 903aebf4-b693-4fc9-b720-5e2aee529d4d
	I0116 04:28:33.907459 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:28:33.907649 2484801 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-701570","namespace":"kube-system","uid":"0a5cfa74-94f0-4823-a5a1-5958ed6b1bf0","resourceVersion":"300","creationTimestamp":"2024-01-16T04:28:17Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"cabe73156eb586d028a90186f6f018fa","kubernetes.io/config.mirror":"cabe73156eb586d028a90186f6f018fa","kubernetes.io/config.seen":"2024-01-16T04:28:09.585168419Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-701570","uid":"966e9bfd-0814-4772-920d-6bdadae6d98d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0116 04:28:33.908157 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570
	I0116 04:28:33.908176 2484801 round_trippers.go:469] Request Headers:
	I0116 04:28:33.908185 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:28:33.908193 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:28:33.910702 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:28:33.910723 2484801 round_trippers.go:577] Response Headers:
	I0116 04:28:33.910735 2484801 round_trippers.go:580]     Audit-Id: d5f32dbe-d2f6-44da-b670-38c9fcd3fbff
	I0116 04:28:33.910742 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:28:33.910748 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:28:33.910754 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:28:33.910760 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:28:33.910766 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:28:33 GMT
	I0116 04:28:33.910904 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570","uid":"966e9bfd-0814-4772-920d-6bdadae6d98d","resourceVersion":"414","creationTimestamp":"2024-01-16T04:28:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T04_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:15Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0116 04:28:33.911315 2484801 pod_ready.go:92] pod "etcd-multinode-701570" in "kube-system" namespace has status "Ready":"True"
	I0116 04:28:33.911336 2484801 pod_ready.go:81] duration metric: took 6.726787ms waiting for pod "etcd-multinode-701570" in "kube-system" namespace to be "Ready" ...
	I0116 04:28:33.911349 2484801 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-701570" in "kube-system" namespace to be "Ready" ...
	I0116 04:28:33.911415 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-701570
	I0116 04:28:33.911428 2484801 round_trippers.go:469] Request Headers:
	I0116 04:28:33.911437 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:28:33.911444 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:28:33.913791 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:28:33.913811 2484801 round_trippers.go:577] Response Headers:
	I0116 04:28:33.913819 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:28:33.913825 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:28:33.913831 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:28:33.913837 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:28:33.913843 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:28:33 GMT
	I0116 04:28:33.913849 2484801 round_trippers.go:580]     Audit-Id: d3c46d52-b05b-4e44-b61c-a18c43fbb3e6
	I0116 04:28:33.914459 2484801 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-701570","namespace":"kube-system","uid":"b9356c08-4daf-406f-a670-6a9b9e16f9f5","resourceVersion":"304","creationTimestamp":"2024-01-16T04:28:18Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"58157bca00bf98e9c1e982a9206a6678","kubernetes.io/config.mirror":"58157bca00bf98e9c1e982a9206a6678","kubernetes.io/config.seen":"2024-01-16T04:28:18.471369511Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-701570","uid":"966e9bfd-0814-4772-920d-6bdadae6d98d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0116 04:28:33.915116 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570
	I0116 04:28:33.915134 2484801 round_trippers.go:469] Request Headers:
	I0116 04:28:33.915142 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:28:33.915150 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:28:33.921935 2484801 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0116 04:28:33.921963 2484801 round_trippers.go:577] Response Headers:
	I0116 04:28:33.921973 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:28:33.921980 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:28:33.921986 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:28:33 GMT
	I0116 04:28:33.921993 2484801 round_trippers.go:580]     Audit-Id: eb1966c4-0830-47a2-b730-f7e19781a424
	I0116 04:28:33.921999 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:28:33.922006 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:28:33.922954 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570","uid":"966e9bfd-0814-4772-920d-6bdadae6d98d","resourceVersion":"414","creationTimestamp":"2024-01-16T04:28:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T04_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:15Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0116 04:28:33.923348 2484801 pod_ready.go:92] pod "kube-apiserver-multinode-701570" in "kube-system" namespace has status "Ready":"True"
	I0116 04:28:33.923366 2484801 pod_ready.go:81] duration metric: took 12.005193ms waiting for pod "kube-apiserver-multinode-701570" in "kube-system" namespace to be "Ready" ...
	I0116 04:28:33.923377 2484801 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-701570" in "kube-system" namespace to be "Ready" ...
	I0116 04:28:33.923449 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-701570
	I0116 04:28:33.923460 2484801 round_trippers.go:469] Request Headers:
	I0116 04:28:33.923468 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:28:33.923475 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:28:33.926068 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:28:33.926088 2484801 round_trippers.go:577] Response Headers:
	I0116 04:28:33.926097 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:28:33.926103 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:28:33.926109 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:28:33.926115 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:28:33 GMT
	I0116 04:28:33.926121 2484801 round_trippers.go:580]     Audit-Id: 7a69efff-5044-4fed-897d-afaaf63d3c8c
	I0116 04:28:33.926128 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:28:33.926251 2484801 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-701570","namespace":"kube-system","uid":"99034f3b-f366-4321-9b3e-a956f134b849","resourceVersion":"306","creationTimestamp":"2024-01-16T04:28:18Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e732cbf0a05a16e89e295e6fc3da387d","kubernetes.io/config.mirror":"e732cbf0a05a16e89e295e6fc3da387d","kubernetes.io/config.seen":"2024-01-16T04:28:18.471370815Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-701570","uid":"966e9bfd-0814-4772-920d-6bdadae6d98d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0116 04:28:33.926755 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570
	I0116 04:28:33.926777 2484801 round_trippers.go:469] Request Headers:
	I0116 04:28:33.926786 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:28:33.926794 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:28:33.929088 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:28:33.929110 2484801 round_trippers.go:577] Response Headers:
	I0116 04:28:33.929118 2484801 round_trippers.go:580]     Audit-Id: 5e730946-5f96-4b04-95cb-23bdfca31e55
	I0116 04:28:33.929125 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:28:33.929131 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:28:33.929141 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:28:33.929153 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:28:33.929160 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:28:33 GMT
	I0116 04:28:33.929309 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570","uid":"966e9bfd-0814-4772-920d-6bdadae6d98d","resourceVersion":"414","creationTimestamp":"2024-01-16T04:28:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T04_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:15Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0116 04:28:33.929674 2484801 pod_ready.go:92] pod "kube-controller-manager-multinode-701570" in "kube-system" namespace has status "Ready":"True"
	I0116 04:28:33.929690 2484801 pod_ready.go:81] duration metric: took 6.299057ms waiting for pod "kube-controller-manager-multinode-701570" in "kube-system" namespace to be "Ready" ...
	I0116 04:28:33.929704 2484801 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zmnvg" in "kube-system" namespace to be "Ready" ...
	I0116 04:28:33.929765 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zmnvg
	I0116 04:28:33.929776 2484801 round_trippers.go:469] Request Headers:
	I0116 04:28:33.929783 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:28:33.929790 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:28:33.932137 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:28:33.932186 2484801 round_trippers.go:577] Response Headers:
	I0116 04:28:33.932198 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:28:33.932212 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:28:33 GMT
	I0116 04:28:33.932219 2484801 round_trippers.go:580]     Audit-Id: db3a5f86-8eea-448a-9931-588509cc695c
	I0116 04:28:33.932231 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:28:33.932244 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:28:33.932257 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:28:33.932371 2484801 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-zmnvg","generateName":"kube-proxy-","namespace":"kube-system","uid":"49fc2d49-9a21-4b2f-afe7-0bbf3a4fa6b1","resourceVersion":"408","creationTimestamp":"2024-01-16T04:28:31Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bed87baa-dee4-463c-a56f-428fde34fcf2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bed87baa-dee4-463c-a56f-428fde34fcf2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0116 04:28:34.081140 2484801 request.go:629] Waited for 148.283128ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-701570
	I0116 04:28:34.081220 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570
	I0116 04:28:34.081227 2484801 round_trippers.go:469] Request Headers:
	I0116 04:28:34.081236 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:28:34.081270 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:28:34.083868 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:28:34.083892 2484801 round_trippers.go:577] Response Headers:
	I0116 04:28:34.083902 2484801 round_trippers.go:580]     Audit-Id: c561faba-95e7-42cc-825c-9ea30e70553f
	I0116 04:28:34.083909 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:28:34.083915 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:28:34.083921 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:28:34.083927 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:28:34.083938 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:28:34 GMT
	I0116 04:28:34.084196 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570","uid":"966e9bfd-0814-4772-920d-6bdadae6d98d","resourceVersion":"414","creationTimestamp":"2024-01-16T04:28:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T04_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:15Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0116 04:28:34.084669 2484801 pod_ready.go:92] pod "kube-proxy-zmnvg" in "kube-system" namespace has status "Ready":"True"
	I0116 04:28:34.084690 2484801 pod_ready.go:81] duration metric: took 154.97534ms waiting for pod "kube-proxy-zmnvg" in "kube-system" namespace to be "Ready" ...
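
The "Waited for ... due to client-side throttling" messages above come from client-go's default client-side rate limiter (QPS 5, burst 10), not from API Priority and Fairness. A minimal sketch, not minikube's actual code, of raising those limits on a rest.Config before building a clientset; the kubeconfig path is a placeholder:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder path; point this at a real kubeconfig.
		config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			panic(err)
		}
		// client-go defaults are QPS=5, Burst=10; raising them avoids the
		// "client-side throttling" waits seen in the log above.
		config.QPS = 50
		config.Burst = 100

		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("nodes:", len(nodes.Items))
	}
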
	I0116 04:28:34.084736 2484801 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-701570" in "kube-system" namespace to be "Ready" ...
	I0116 04:28:34.281006 2484801 request.go:629] Waited for 196.187956ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-701570
	I0116 04:28:34.281157 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-701570
	I0116 04:28:34.281179 2484801 round_trippers.go:469] Request Headers:
	I0116 04:28:34.281217 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:28:34.281240 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:28:34.283964 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:28:34.284084 2484801 round_trippers.go:577] Response Headers:
	I0116 04:28:34.284097 2484801 round_trippers.go:580]     Audit-Id: 8b32c30d-14e1-47ad-9883-fbb929f038fd
	I0116 04:28:34.284114 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:28:34.284127 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:28:34.284134 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:28:34.284140 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:28:34.284150 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:28:34 GMT
	I0116 04:28:34.284300 2484801 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-701570","namespace":"kube-system","uid":"60bf74e8-565d-49eb-98d9-7696c5cb222a","resourceVersion":"302","creationTimestamp":"2024-01-16T04:28:18Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8544d8b8e3b1e3a8d0d12fd2af1361e5","kubernetes.io/config.mirror":"8544d8b8e3b1e3a8d0d12fd2af1361e5","kubernetes.io/config.seen":"2024-01-16T04:28:18.471371824Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-701570","uid":"966e9bfd-0814-4772-920d-6bdadae6d98d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0116 04:28:34.481458 2484801 request.go:629] Waited for 196.709124ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-701570
	I0116 04:28:34.481556 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570
	I0116 04:28:34.481564 2484801 round_trippers.go:469] Request Headers:
	I0116 04:28:34.481574 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:28:34.481597 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:28:34.484238 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:28:34.484307 2484801 round_trippers.go:577] Response Headers:
	I0116 04:28:34.484330 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:28:34.484354 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:28:34.484392 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:28:34 GMT
	I0116 04:28:34.484404 2484801 round_trippers.go:580]     Audit-Id: beb18d69-5ab7-4a7e-bfca-c32f99c98ca3
	I0116 04:28:34.484411 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:28:34.484417 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:28:34.484539 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570","uid":"966e9bfd-0814-4772-920d-6bdadae6d98d","resourceVersion":"414","creationTimestamp":"2024-01-16T04:28:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T04_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:15Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0116 04:28:34.484959 2484801 pod_ready.go:92] pod "kube-scheduler-multinode-701570" in "kube-system" namespace has status "Ready":"True"
	I0116 04:28:34.484980 2484801 pod_ready.go:81] duration metric: took 400.234297ms waiting for pod "kube-scheduler-multinode-701570" in "kube-system" namespace to be "Ready" ...
	I0116 04:28:34.484993 2484801 pod_ready.go:38] duration metric: took 1.600929178s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
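
Each wait above is a poll of the pod's PodReady condition until it reports True. A minimal client-go sketch of the same kind of readiness check (not minikube's actual code; the kubeconfig path is a placeholder, and the pod name is taken from the log):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's PodReady condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		// Poll every second, up to 6 minutes, mirroring the 6m0s waits above.
		err = wait.PollUntilContextTimeout(context.Background(), time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-multinode-701570", metav1.GetOptions{})
				if err != nil {
					return false, nil // keep polling on transient errors
				}
				return podReady(pod), nil
			})
		if err != nil {
			panic(err)
		}
		fmt.Println("pod is Ready")
	}
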
	I0116 04:28:34.485010 2484801 api_server.go:52] waiting for apiserver process to appear ...
	I0116 04:28:34.485071 2484801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 04:28:34.498235 2484801 command_runner.go:130] > 1240
	I0116 04:28:34.498269 2484801 api_server.go:72] duration metric: took 3.175371308s to wait for apiserver process to appear ...
	I0116 04:28:34.498279 2484801 api_server.go:88] waiting for apiserver healthz status ...
	I0116 04:28:34.498298 2484801 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0116 04:28:34.507438 2484801 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
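
The healthz probe above is a plain HTTPS GET that expects the literal body "ok". A short sketch against the endpoint shown in the log; skipping certificate verification is an assumption made for brevity, since the apiserver's CA is not in the host trust store:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver serves a self-signed chain, so verification is
			// skipped here; a real client would trust the cluster CA instead.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.58.2:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok"
	}
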
	I0116 04:28:34.507516 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I0116 04:28:34.507523 2484801 round_trippers.go:469] Request Headers:
	I0116 04:28:34.507532 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:28:34.507539 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:28:34.509126 2484801 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 04:28:34.509146 2484801 round_trippers.go:577] Response Headers:
	I0116 04:28:34.509155 2484801 round_trippers.go:580]     Content-Length: 264
	I0116 04:28:34.509161 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:28:34 GMT
	I0116 04:28:34.509167 2484801 round_trippers.go:580]     Audit-Id: 7afe0359-fe07-4250-8a1d-5943dfc5bfbb
	I0116 04:28:34.509173 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:28:34.509180 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:28:34.509186 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:28:34.509194 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:28:34.509569 2484801 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/arm64"
	}
	I0116 04:28:34.509660 2484801 api_server.go:141] control plane version: v1.28.4
	I0116 04:28:34.509684 2484801 api_server.go:131] duration metric: took 11.398242ms to wait for apiserver health ...
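
The /version payload above is what client-go's discovery interface returns as a version.Info. A minimal sketch, again with a placeholder kubeconfig path:

	package main

	import (
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		info, err := cs.Discovery().ServerVersion() // GET /version under the hood
		if err != nil {
			panic(err)
		}
		fmt.Println(info.GitVersion, info.Platform) // e.g. v1.28.4 linux/arm64
	}
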
	I0116 04:28:34.509694 2484801 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 04:28:34.680950 2484801 request.go:629] Waited for 171.178195ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0116 04:28:34.681072 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0116 04:28:34.681103 2484801 round_trippers.go:469] Request Headers:
	I0116 04:28:34.681131 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:28:34.681156 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:28:34.684896 2484801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 04:28:34.684923 2484801 round_trippers.go:577] Response Headers:
	I0116 04:28:34.684937 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:28:34.684944 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:28:34.684951 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:28:34.684960 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:28:34 GMT
	I0116 04:28:34.684971 2484801 round_trippers.go:580]     Audit-Id: 09c68181-93d8-4c6b-a67f-f791b97862ac
	I0116 04:28:34.684987 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:28:34.685817 2484801 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"434"},"items":[{"metadata":{"name":"coredns-5dd5756b68-hm6kd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"0707ad3b-2557-49c2-bdc3-77554baac045","resourceVersion":"430","creationTimestamp":"2024-01-16T04:28:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"13e7ee15-d416-49ef-a50d-0f96dca51f4c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13e7ee15-d416-49ef-a50d-0f96dca51f4c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I0116 04:28:34.688641 2484801 system_pods.go:59] 8 kube-system pods found
	I0116 04:28:34.688673 2484801 system_pods.go:61] "coredns-5dd5756b68-hm6kd" [0707ad3b-2557-49c2-bdc3-77554baac045] Running
	I0116 04:28:34.688680 2484801 system_pods.go:61] "etcd-multinode-701570" [0a5cfa74-94f0-4823-a5a1-5958ed6b1bf0] Running
	I0116 04:28:34.688687 2484801 system_pods.go:61] "kindnet-xkvsh" [9653a16d-c4ad-4021-be3b-8e4292b418fc] Running
	I0116 04:28:34.688693 2484801 system_pods.go:61] "kube-apiserver-multinode-701570" [b9356c08-4daf-406f-a670-6a9b9e16f9f5] Running
	I0116 04:28:34.688705 2484801 system_pods.go:61] "kube-controller-manager-multinode-701570" [99034f3b-f366-4321-9b3e-a956f134b849] Running
	I0116 04:28:34.688713 2484801 system_pods.go:61] "kube-proxy-zmnvg" [49fc2d49-9a21-4b2f-afe7-0bbf3a4fa6b1] Running
	I0116 04:28:34.688719 2484801 system_pods.go:61] "kube-scheduler-multinode-701570" [60bf74e8-565d-49eb-98d9-7696c5cb222a] Running
	I0116 04:28:34.688724 2484801 system_pods.go:61] "storage-provisioner" [afb9aebf-f18a-478d-b561-54bd61c7403a] Running
	I0116 04:28:34.688733 2484801 system_pods.go:74] duration metric: took 179.029991ms to wait for pod list to return data ...
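
The "8 kube-system pods found" summary comes from a single PodList request against the namespace. A minimal equivalent that lists the namespace and prints each pod with its phase:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%d kube-system pods found\n", len(pods.Items))
		for _, p := range pods.Items {
			// Mirrors the log's `"name" [uid] Phase` lines.
			fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
		}
	}
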
	I0116 04:28:34.688744 2484801 default_sa.go:34] waiting for default service account to be created ...
	I0116 04:28:34.881188 2484801 request.go:629] Waited for 192.334252ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0116 04:28:34.881271 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0116 04:28:34.881281 2484801 round_trippers.go:469] Request Headers:
	I0116 04:28:34.881289 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:28:34.881299 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:28:34.884004 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:28:34.884066 2484801 round_trippers.go:577] Response Headers:
	I0116 04:28:34.884082 2484801 round_trippers.go:580]     Audit-Id: 4717808d-2a39-4522-840a-c656a7bd06a7
	I0116 04:28:34.884090 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:28:34.884097 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:28:34.884103 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:28:34.884115 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:28:34.884133 2484801 round_trippers.go:580]     Content-Length: 261
	I0116 04:28:34.884141 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:28:34 GMT
	I0116 04:28:34.884161 2484801 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"434"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"9a3ce82d-2512-4616-a300-7e6679ff0142","resourceVersion":"334","creationTimestamp":"2024-01-16T04:28:30Z"}}]}
	I0116 04:28:34.884387 2484801 default_sa.go:45] found service account: "default"
	I0116 04:28:34.884405 2484801 default_sa.go:55] duration metric: took 195.632639ms for default service account to be created ...
	I0116 04:28:34.884417 2484801 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 04:28:35.081849 2484801 request.go:629] Waited for 197.353825ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0116 04:28:35.081944 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0116 04:28:35.081952 2484801 round_trippers.go:469] Request Headers:
	I0116 04:28:35.081962 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:28:35.081970 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:28:35.086194 2484801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 04:28:35.086271 2484801 round_trippers.go:577] Response Headers:
	I0116 04:28:35.086294 2484801 round_trippers.go:580]     Audit-Id: dd2b0c37-9d18-4043-a980-fac52a70c530
	I0116 04:28:35.086317 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:28:35.086353 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:28:35.086380 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:28:35.086395 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:28:35.086405 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:28:35 GMT
	I0116 04:28:35.086853 2484801 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"434"},"items":[{"metadata":{"name":"coredns-5dd5756b68-hm6kd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"0707ad3b-2557-49c2-bdc3-77554baac045","resourceVersion":"430","creationTimestamp":"2024-01-16T04:28:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"13e7ee15-d416-49ef-a50d-0f96dca51f4c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13e7ee15-d416-49ef-a50d-0f96dca51f4c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I0116 04:28:35.089489 2484801 system_pods.go:86] 8 kube-system pods found
	I0116 04:28:35.089523 2484801 system_pods.go:89] "coredns-5dd5756b68-hm6kd" [0707ad3b-2557-49c2-bdc3-77554baac045] Running
	I0116 04:28:35.089530 2484801 system_pods.go:89] "etcd-multinode-701570" [0a5cfa74-94f0-4823-a5a1-5958ed6b1bf0] Running
	I0116 04:28:35.089536 2484801 system_pods.go:89] "kindnet-xkvsh" [9653a16d-c4ad-4021-be3b-8e4292b418fc] Running
	I0116 04:28:35.089541 2484801 system_pods.go:89] "kube-apiserver-multinode-701570" [b9356c08-4daf-406f-a670-6a9b9e16f9f5] Running
	I0116 04:28:35.089546 2484801 system_pods.go:89] "kube-controller-manager-multinode-701570" [99034f3b-f366-4321-9b3e-a956f134b849] Running
	I0116 04:28:35.089552 2484801 system_pods.go:89] "kube-proxy-zmnvg" [49fc2d49-9a21-4b2f-afe7-0bbf3a4fa6b1] Running
	I0116 04:28:35.089558 2484801 system_pods.go:89] "kube-scheduler-multinode-701570" [60bf74e8-565d-49eb-98d9-7696c5cb222a] Running
	I0116 04:28:35.089564 2484801 system_pods.go:89] "storage-provisioner" [afb9aebf-f18a-478d-b561-54bd61c7403a] Running
	I0116 04:28:35.089572 2484801 system_pods.go:126] duration metric: took 205.14699ms to wait for k8s-apps to be running ...
	I0116 04:28:35.089580 2484801 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 04:28:35.089655 2484801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 04:28:35.104954 2484801 system_svc.go:56] duration metric: took 15.361112ms WaitForService to wait for kubelet.
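
The kubelet check shells out to systemd and treats a zero exit status as "running". A sketch of the same check with os/exec (on a real host this would be run as root or via sudo, as in the log):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// `systemctl is-active --quiet kubelet` exits 0 iff the unit is active.
		err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
		if err != nil {
			fmt.Println("kubelet is not running:", err)
			return
		}
		fmt.Println("kubelet is running")
	}
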
	I0116 04:28:35.104996 2484801 kubeadm.go:581] duration metric: took 3.782098283s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 04:28:35.105018 2484801 node_conditions.go:102] verifying NodePressure condition ...
	I0116 04:28:35.281414 2484801 request.go:629] Waited for 176.28401ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0116 04:28:35.281470 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0116 04:28:35.281477 2484801 round_trippers.go:469] Request Headers:
	I0116 04:28:35.281486 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:28:35.281496 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:28:35.284095 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:28:35.284130 2484801 round_trippers.go:577] Response Headers:
	I0116 04:28:35.284139 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:28:35.284145 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:28:35.284152 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:28:35 GMT
	I0116 04:28:35.284158 2484801 round_trippers.go:580]     Audit-Id: 90902ad7-3f72-45d6-8caf-d0396d24d36b
	I0116 04:28:35.284169 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:28:35.284175 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:28:35.284354 2484801 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"434"},"items":[{"metadata":{"name":"multinode-701570","uid":"966e9bfd-0814-4772-920d-6bdadae6d98d","resourceVersion":"414","creationTimestamp":"2024-01-16T04:28:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T04_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6082 chars]
	I0116 04:28:35.284822 2484801 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0116 04:28:35.284851 2484801 node_conditions.go:123] node cpu capacity is 2
	I0116 04:28:35.284862 2484801 node_conditions.go:105] duration metric: took 179.83258ms to run NodePressure ...
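
The NodePressure verification reads ephemeral-storage and CPU capacity straight off each node's status. A minimal sketch printing the same two quantities for every node in the cluster:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			// e.g. "multinode-701570: ephemeral storage 203034800Ki, cpu 2"
			fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
		}
	}
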
	I0116 04:28:35.284874 2484801 start.go:228] waiting for startup goroutines ...
	I0116 04:28:35.284884 2484801 start.go:233] waiting for cluster config update ...
	I0116 04:28:35.284894 2484801 start.go:242] writing updated cluster config ...
	I0116 04:28:35.287199 2484801 out.go:177] 
	I0116 04:28:35.289128 2484801 config.go:182] Loaded profile config "multinode-701570": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 04:28:35.289242 2484801 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/multinode-701570/config.json ...
	I0116 04:28:35.291288 2484801 out.go:177] * Starting worker node multinode-701570-m02 in cluster multinode-701570
	I0116 04:28:35.293403 2484801 cache.go:121] Beginning downloading kic base image for docker with crio
	I0116 04:28:35.295146 2484801 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0116 04:28:35.296970 2484801 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 04:28:35.297005 2484801 cache.go:56] Caching tarball of preloaded images
	I0116 04:28:35.297039 2484801 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0116 04:28:35.297099 2484801 preload.go:174] Found /home/jenkins/minikube-integration/17965-2415678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0116 04:28:35.297109 2484801 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0116 04:28:35.297198 2484801 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/multinode-701570/config.json ...
	I0116 04:28:35.314130 2484801 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0116 04:28:35.314154 2484801 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0116 04:28:35.314175 2484801 cache.go:194] Successfully downloaded all kic artifacts
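
The "Found ... in local docker daemon, skipping pull" decision above is an image inspect against the local daemon. A sketch with the Docker Go SDK; the shortened image reference is a placeholder for the full kicbase digest in the log:

	package main

	import (
		"context"
		"fmt"

		"github.com/docker/docker/client"
	)

	func main() {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			panic(err)
		}
		defer cli.Close()

		// Placeholder reference; the log uses the full kicbase name plus digest.
		ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866"
		if _, _, err := cli.ImageInspectWithRaw(context.Background(), ref); err != nil {
			if client.IsErrNotFound(err) {
				fmt.Println("image not in local daemon, would pull")
				return
			}
			panic(err)
		}
		fmt.Println("image found in local daemon, skipping pull")
	}
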
	I0116 04:28:35.314204 2484801 start.go:365] acquiring machines lock for multinode-701570-m02: {Name:mk4bf3c780eb55df931e01c3edf7c25d1974833d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 04:28:35.314404 2484801 start.go:369] acquired machines lock for "multinode-701570-m02" in 107.493µs
	I0116 04:28:35.314429 2484801 start.go:93] Provisioning new machine with config: &{Name:multinode-701570 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-701570 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0116 04:28:35.314511 2484801 start.go:125] createHost starting for "m02" (driver="docker")
	I0116 04:28:35.317930 2484801 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0116 04:28:35.318033 2484801 start.go:159] libmachine.API.Create for "multinode-701570" (driver="docker")
	I0116 04:28:35.318057 2484801 client.go:168] LocalClient.Create starting
	I0116 04:28:35.318124 2484801 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/ca.pem
	I0116 04:28:35.318161 2484801 main.go:141] libmachine: Decoding PEM data...
	I0116 04:28:35.318182 2484801 main.go:141] libmachine: Parsing certificate...
	I0116 04:28:35.318242 2484801 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/cert.pem
	I0116 04:28:35.318266 2484801 main.go:141] libmachine: Decoding PEM data...
	I0116 04:28:35.318284 2484801 main.go:141] libmachine: Parsing certificate...
	I0116 04:28:35.318550 2484801 cli_runner.go:164] Run: docker network inspect multinode-701570 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0116 04:28:35.335483 2484801 network_create.go:77] Found existing network {name:multinode-701570 subnet:0x4000b7c0c0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I0116 04:28:35.335528 2484801 kic.go:121] calculated static IP "192.168.58.3" for the "multinode-701570-m02" container
	I0116 04:28:35.335606 2484801 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0116 04:28:35.353846 2484801 cli_runner.go:164] Run: docker volume create multinode-701570-m02 --label name.minikube.sigs.k8s.io=multinode-701570-m02 --label created_by.minikube.sigs.k8s.io=true
	I0116 04:28:35.371601 2484801 oci.go:103] Successfully created a docker volume multinode-701570-m02
	I0116 04:28:35.371693 2484801 cli_runner.go:164] Run: docker run --rm --name multinode-701570-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-701570-m02 --entrypoint /usr/bin/test -v multinode-701570-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0116 04:28:35.971010 2484801 oci.go:107] Successfully prepared a docker volume multinode-701570-m02
	I0116 04:28:35.971044 2484801 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 04:28:35.971063 2484801 kic.go:194] Starting extracting preloaded images to volume ...
	I0116 04:28:35.971144 2484801 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17965-2415678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-701570-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0116 04:28:40.301803 2484801 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17965-2415678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-701570-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.330621575s)
	I0116 04:28:40.301835 2484801 kic.go:203] duration metric: took 4.330770 seconds to extract preloaded images to volume
	W0116 04:28:40.301974 2484801 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0116 04:28:40.302091 2484801 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0116 04:28:40.369226 2484801 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-701570-m02 --name multinode-701570-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-701570-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-701570-m02 --network multinode-701570 --ip 192.168.58.3 --volume multinode-701570-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0
	I0116 04:28:40.738977 2484801 cli_runner.go:164] Run: docker container inspect multinode-701570-m02 --format={{.State.Running}}
	I0116 04:28:40.763750 2484801 cli_runner.go:164] Run: docker container inspect multinode-701570-m02 --format={{.State.Status}}
	I0116 04:28:40.788292 2484801 cli_runner.go:164] Run: docker exec multinode-701570-m02 stat /var/lib/dpkg/alternatives/iptables
	I0116 04:28:40.870701 2484801 oci.go:144] the created container "multinode-701570-m02" has a running status.
	I0116 04:28:40.870726 2484801 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17965-2415678/.minikube/machines/multinode-701570-m02/id_rsa...
	I0116 04:28:41.353929 2484801 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-2415678/.minikube/machines/multinode-701570-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0116 04:28:41.354016 2484801 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17965-2415678/.minikube/machines/multinode-701570-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0116 04:28:41.382250 2484801 cli_runner.go:164] Run: docker container inspect multinode-701570-m02 --format={{.State.Status}}
	I0116 04:28:41.418923 2484801 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0116 04:28:41.418943 2484801 kic_runner.go:114] Args: [docker exec --privileged multinode-701570-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0116 04:28:41.484999 2484801 cli_runner.go:164] Run: docker container inspect multinode-701570-m02 --format={{.State.Status}}
	I0116 04:28:41.519876 2484801 machine.go:88] provisioning docker machine ...
	I0116 04:28:41.519906 2484801 ubuntu.go:169] provisioning hostname "multinode-701570-m02"
	I0116 04:28:41.519974 2484801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701570-m02
	I0116 04:28:41.546285 2484801 main.go:141] libmachine: Using SSH client type: native
	I0116 04:28:41.546705 2484801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 35396 <nil> <nil>}
	I0116 04:28:41.546717 2484801 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-701570-m02 && echo "multinode-701570-m02" | sudo tee /etc/hostname
	I0116 04:28:41.727129 2484801 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-701570-m02
	
	I0116 04:28:41.727210 2484801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701570-m02
	I0116 04:28:41.752428 2484801 main.go:141] libmachine: Using SSH client type: native
	I0116 04:28:41.752854 2484801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 35396 <nil> <nil>}
	I0116 04:28:41.752873 2484801 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-701570-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-701570-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-701570-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 04:28:41.898335 2484801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 04:28:41.898362 2484801 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17965-2415678/.minikube CaCertPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17965-2415678/.minikube}
	I0116 04:28:41.898385 2484801 ubuntu.go:177] setting up certificates
	I0116 04:28:41.898396 2484801 provision.go:83] configureAuth start
	I0116 04:28:41.898460 2484801 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-701570-m02
	I0116 04:28:41.926358 2484801 provision.go:138] copyHostCerts
	I0116 04:28:41.926395 2484801 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17965-2415678/.minikube/ca.pem
	I0116 04:28:41.926425 2484801 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-2415678/.minikube/ca.pem, removing ...
	I0116 04:28:41.926431 2484801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-2415678/.minikube/ca.pem
	I0116 04:28:41.926507 2484801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17965-2415678/.minikube/ca.pem (1078 bytes)
	I0116 04:28:41.926579 2484801 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17965-2415678/.minikube/cert.pem
	I0116 04:28:41.926596 2484801 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-2415678/.minikube/cert.pem, removing ...
	I0116 04:28:41.926600 2484801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-2415678/.minikube/cert.pem
	I0116 04:28:41.926625 2484801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17965-2415678/.minikube/cert.pem (1123 bytes)
	I0116 04:28:41.926664 2484801 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17965-2415678/.minikube/key.pem
	I0116 04:28:41.926681 2484801 exec_runner.go:144] found /home/jenkins/minikube-integration/17965-2415678/.minikube/key.pem, removing ...
	I0116 04:28:41.926685 2484801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17965-2415678/.minikube/key.pem
	I0116 04:28:41.926707 2484801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17965-2415678/.minikube/key.pem (1679 bytes)
	I0116 04:28:41.926750 2484801 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17965-2415678/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17965-2415678/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17965-2415678/.minikube/certs/ca-key.pem org=jenkins.multinode-701570-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-701570-m02]
	I0116 04:28:43.057180 2484801 provision.go:172] copyRemoteCerts
	I0116 04:28:43.057254 2484801 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 04:28:43.057299 2484801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701570-m02
	I0116 04:28:43.081458 2484801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35396 SSHKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/machines/multinode-701570-m02/id_rsa Username:docker}
	I0116 04:28:43.183572 2484801 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0116 04:28:43.183639 2484801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0116 04:28:43.211980 2484801 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-2415678/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0116 04:28:43.212043 2484801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-2415678/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0116 04:28:43.240486 2484801 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-2415678/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0116 04:28:43.240547 2484801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-2415678/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 04:28:43.269812 2484801 provision.go:86] duration metric: configureAuth took 1.371397659s
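The configureAuth step above issues a server certificate signed by the profile CA, with the SANs taken verbatim from the san=[...] list in the log. A minimal openssl sketch of an equivalent issuance (bash; assumes the CA pair sits in the current directory as ca.pem/ca-key.pem):

	# key + CSR for the node; the org mirrors the log's org= field
	openssl req -new -newkey rsa:2048 -nodes \
	  -keyout server-key.pem -out server.csr \
	  -subj "/O=jenkins.multinode-701570-m02"
	# sign with the CA, attaching the SANs the log lists
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -days 365 -out server.pem \
	  -extfile <(printf 'subjectAltName=IP:192.168.58.3,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:multinode-701570-m02')
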
	I0116 04:28:43.269839 2484801 ubuntu.go:193] setting minikube options for container-runtime
	I0116 04:28:43.270032 2484801 config.go:182] Loaded profile config "multinode-701570": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 04:28:43.270134 2484801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701570-m02
	I0116 04:28:43.290531 2484801 main.go:141] libmachine: Using SSH client type: native
	I0116 04:28:43.290953 2484801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 35396 <nil> <nil>}
	I0116 04:28:43.290974 2484801 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0116 04:28:43.547019 2484801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0116 04:28:43.547045 2484801 machine.go:91] provisioned docker machine in 2.027149507s
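The %!s(MISSING) in the logged restart command above is not part of what ran: it is Go's fmt complaining when the logger re-formats a string containing a literal %s verb. The command presumably sent over SSH writes the insecure-registry flag into a sysconfig fragment and restarts CRI-O to pick it up:

	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
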
	I0116 04:28:43.547055 2484801 client.go:171] LocalClient.Create took 8.228991253s
	I0116 04:28:43.547070 2484801 start.go:167] duration metric: libmachine.API.Create for "multinode-701570" took 8.229037694s
	I0116 04:28:43.547092 2484801 start.go:300] post-start starting for "multinode-701570-m02" (driver="docker")
	I0116 04:28:43.547104 2484801 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 04:28:43.547168 2484801 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 04:28:43.547210 2484801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701570-m02
	I0116 04:28:43.569992 2484801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35396 SSHKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/machines/multinode-701570-m02/id_rsa Username:docker}
	I0116 04:28:43.675887 2484801 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 04:28:43.679814 2484801 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I0116 04:28:43.679833 2484801 command_runner.go:130] > NAME="Ubuntu"
	I0116 04:28:43.679840 2484801 command_runner.go:130] > VERSION_ID="22.04"
	I0116 04:28:43.679846 2484801 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I0116 04:28:43.679853 2484801 command_runner.go:130] > VERSION_CODENAME=jammy
	I0116 04:28:43.679857 2484801 command_runner.go:130] > ID=ubuntu
	I0116 04:28:43.679866 2484801 command_runner.go:130] > ID_LIKE=debian
	I0116 04:28:43.679872 2484801 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0116 04:28:43.679878 2484801 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0116 04:28:43.679889 2484801 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0116 04:28:43.679898 2484801 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0116 04:28:43.679903 2484801 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0116 04:28:43.679943 2484801 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0116 04:28:43.679966 2484801 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0116 04:28:43.679977 2484801 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0116 04:28:43.679983 2484801 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0116 04:28:43.679993 2484801 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-2415678/.minikube/addons for local assets ...
	I0116 04:28:43.680049 2484801 filesync.go:126] Scanning /home/jenkins/minikube-integration/17965-2415678/.minikube/files for local assets ...
	I0116 04:28:43.680126 2484801 filesync.go:149] local asset: /home/jenkins/minikube-integration/17965-2415678/.minikube/files/etc/ssl/certs/24210052.pem -> 24210052.pem in /etc/ssl/certs
	I0116 04:28:43.680132 2484801 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-2415678/.minikube/files/etc/ssl/certs/24210052.pem -> /etc/ssl/certs/24210052.pem
	I0116 04:28:43.680228 2484801 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 04:28:43.690589 2484801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-2415678/.minikube/files/etc/ssl/certs/24210052.pem --> /etc/ssl/certs/24210052.pem (1708 bytes)
	I0116 04:28:43.719543 2484801 start.go:303] post-start completed in 172.43486ms
	I0116 04:28:43.719906 2484801 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-701570-m02
	I0116 04:28:43.737859 2484801 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/multinode-701570/config.json ...
	I0116 04:28:43.738145 2484801 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0116 04:28:43.738198 2484801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701570-m02
	I0116 04:28:43.760317 2484801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35396 SSHKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/machines/multinode-701570-m02/id_rsa Username:docker}
	I0116 04:28:43.863018 2484801 command_runner.go:130] > 12%! (MISSING)
	I0116 04:28:43.863094 2484801 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0116 04:28:43.868396 2484801 command_runner.go:130] > 171G
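The two df probes above sample free space on /var before provisioning continues: the first prints the used percentage, the second the space left in gigabytes (the stray %! and (MISSING) fragments around "12%" are the same logger artifact as above):

	df -h /var  | awk 'NR==2{print $5}'   # used percentage  -> "12%"
	df -BG /var | awk 'NR==2{print $4}'   # gigabytes free   -> "171G"
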
	I0116 04:28:43.868831 2484801 start.go:128] duration metric: createHost completed in 8.554308173s
	I0116 04:28:43.868848 2484801 start.go:83] releasing machines lock for "multinode-701570-m02", held for 8.554435785s
	I0116 04:28:43.868918 2484801 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-701570-m02
	I0116 04:28:43.891504 2484801 out.go:177] * Found network options:
	I0116 04:28:43.893464 2484801 out.go:177]   - NO_PROXY=192.168.58.2
	W0116 04:28:43.895438 2484801 proxy.go:119] fail to check proxy env: Error ip not in block
	W0116 04:28:43.895477 2484801 proxy.go:119] fail to check proxy env: Error ip not in block
	I0116 04:28:43.895546 2484801 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0116 04:28:43.895593 2484801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701570-m02
	I0116 04:28:43.895903 2484801 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 04:28:43.895959 2484801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701570-m02
	I0116 04:28:43.917740 2484801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35396 SSHKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/machines/multinode-701570-m02/id_rsa Username:docker}
	I0116 04:28:43.918791 2484801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35396 SSHKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/machines/multinode-701570-m02/id_rsa Username:docker}
	I0116 04:28:44.169983 2484801 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0116 04:28:44.170110 2484801 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0116 04:28:44.175382 2484801 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0116 04:28:44.175449 2484801 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0116 04:28:44.175465 2484801 command_runner.go:130] > Device: b3h/179d	Inode: 1823289     Links: 1
	I0116 04:28:44.175475 2484801 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0116 04:28:44.175482 2484801 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0116 04:28:44.175490 2484801 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0116 04:28:44.175500 2484801 command_runner.go:130] > Change: 2024-01-16 04:06:01.188365486 +0000
	I0116 04:28:44.175506 2484801 command_runner.go:130] >  Birth: 2024-01-16 04:06:01.188365486 +0000
	I0116 04:28:44.175973 2484801 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 04:28:44.200954 2484801 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0116 04:28:44.201029 2484801 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 04:28:44.236020 2484801 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0116 04:28:44.236116 2484801 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
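Rather than deleting the image's preinstalled CNI configs, the find/mv passes above park them under a .mk_disabled suffix, so the CNI loader's *.conf/*.conflist scan no longer sees them and they can be restored by renaming back (the %!p(MISSING) inside the -printf argument is again the logger's fmt artifact). A sketch with safer quoting than the logged one-liner:

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( -name '*bridge*' -o -name '*podman*' \) -not -name '*.mk_disabled' \
	  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
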
	I0116 04:28:44.236139 2484801 start.go:475] detecting cgroup driver to use...
	I0116 04:28:44.236203 2484801 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0116 04:28:44.236283 2484801 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 04:28:44.254685 2484801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 04:28:44.269405 2484801 docker.go:217] disabling cri-docker service (if available) ...
	I0116 04:28:44.269503 2484801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0116 04:28:44.285793 2484801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0116 04:28:44.302984 2484801 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0116 04:28:44.407142 2484801 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0116 04:28:44.524003 2484801 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0116 04:28:44.524060 2484801 docker.go:233] disabling docker service ...
	I0116 04:28:44.524140 2484801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0116 04:28:44.549330 2484801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0116 04:28:44.563717 2484801 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0116 04:28:44.651787 2484801 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0116 04:28:44.651863 2484801 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0116 04:28:44.750369 2484801 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0116 04:28:44.750485 2484801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
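Since the node image ships cri-dockerd and Docker alongside CRI-O, both are stopped, disabled, and masked before CRI-O takes over; masking points the unit at /dev/null (the "Created symlink ... → /dev/null" lines above) so nothing can start them again. Condensed:

	sudo systemctl stop -f cri-docker.socket cri-docker.service docker.socket docker.service
	sudo systemctl disable cri-docker.socket docker.socket
	sudo systemctl mask cri-docker.service docker.service
	sudo systemctl is-active --quiet docker && echo "docker is still running"
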
	I0116 04:28:44.765005 2484801 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 04:28:44.783307 2484801 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
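crictl resolves its runtime endpoint from /etc/crictl.yaml, so the tee above is what lets every later "sudo /usr/bin/crictl ..." call in this log reach CRI-O without an explicit --runtime-endpoint flag. The file written is exactly:

	# /etc/crictl.yaml
	runtime-endpoint: unix:///var/run/crio/crio.sock
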
	I0116 04:28:44.784767 2484801 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0116 04:28:44.784829 2484801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 04:28:44.797086 2484801 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0116 04:28:44.797156 2484801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 04:28:44.809060 2484801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0116 04:28:44.821055 2484801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
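The sed passes above edit the drop-in /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to the "cgroupfs" driver detected on the host, and re-insert conmon_cgroup = "pod", which CRI-O expects when the cgroupfs manager is in use. The resulting keys (section placement assumed from crio.conf(5)):

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"
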
	I0116 04:28:44.833345 2484801 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 04:28:44.844881 2484801 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 04:28:44.854438 2484801 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0116 04:28:44.855685 2484801 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 04:28:44.866308 2484801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 04:28:44.957239 2484801 ssh_runner.go:195] Run: sudo systemctl restart crio
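Kubernetes pod networking needs bridged traffic to pass through iptables and IPv4 forwarding switched on, so both kernel knobs are verified/set before CRI-O is restarted with the new configuration:

	sudo sysctl net.bridge.bridge-nf-call-iptables        # expect "= 1"
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"   # allow routing between pod and host interfaces
	sudo systemctl daemon-reload && sudo systemctl restart crio
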
	I0116 04:28:45.089488 2484801 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0116 04:28:45.089641 2484801 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0116 04:28:45.096083 2484801 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0116 04:28:45.096111 2484801 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0116 04:28:45.096120 2484801 command_runner.go:130] > Device: bch/188d	Inode: 186         Links: 1
	I0116 04:28:45.096129 2484801 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0116 04:28:45.096136 2484801 command_runner.go:130] > Access: 2024-01-16 04:28:45.069749508 +0000
	I0116 04:28:45.096143 2484801 command_runner.go:130] > Modify: 2024-01-16 04:28:45.069749508 +0000
	I0116 04:28:45.096151 2484801 command_runner.go:130] > Change: 2024-01-16 04:28:45.069749508 +0000
	I0116 04:28:45.096156 2484801 command_runner.go:130] >  Birth: -
	I0116 04:28:45.096304 2484801 start.go:543] Will wait 60s for crictl version
	I0116 04:28:45.096409 2484801 ssh_runner.go:195] Run: which crictl
	I0116 04:28:45.101996 2484801 command_runner.go:130] > /usr/bin/crictl
	I0116 04:28:45.102335 2484801 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 04:28:45.163140 2484801 command_runner.go:130] > Version:  0.1.0
	I0116 04:28:45.163367 2484801 command_runner.go:130] > RuntimeName:  cri-o
	I0116 04:28:45.163486 2484801 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0116 04:28:45.163643 2484801 command_runner.go:130] > RuntimeApiVersion:  v1
	I0116 04:28:45.167445 2484801 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0116 04:28:45.167581 2484801 ssh_runner.go:195] Run: crio --version
	I0116 04:28:45.222722 2484801 command_runner.go:130] > crio version 1.24.6
	I0116 04:28:45.222807 2484801 command_runner.go:130] > Version:          1.24.6
	I0116 04:28:45.222834 2484801 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0116 04:28:45.222886 2484801 command_runner.go:130] > GitTreeState:     clean
	I0116 04:28:45.222915 2484801 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0116 04:28:45.222939 2484801 command_runner.go:130] > GoVersion:        go1.18.2
	I0116 04:28:45.222970 2484801 command_runner.go:130] > Compiler:         gc
	I0116 04:28:45.222992 2484801 command_runner.go:130] > Platform:         linux/arm64
	I0116 04:28:45.223020 2484801 command_runner.go:130] > Linkmode:         dynamic
	I0116 04:28:45.223058 2484801 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0116 04:28:45.223078 2484801 command_runner.go:130] > SeccompEnabled:   true
	I0116 04:28:45.223101 2484801 command_runner.go:130] > AppArmorEnabled:  false
	I0116 04:28:45.225931 2484801 ssh_runner.go:195] Run: crio --version
	I0116 04:28:45.288042 2484801 command_runner.go:130] > crio version 1.24.6
	I0116 04:28:45.288124 2484801 command_runner.go:130] > Version:          1.24.6
	I0116 04:28:45.288156 2484801 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0116 04:28:45.288176 2484801 command_runner.go:130] > GitTreeState:     clean
	I0116 04:28:45.288199 2484801 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0116 04:28:45.288237 2484801 command_runner.go:130] > GoVersion:        go1.18.2
	I0116 04:28:45.288257 2484801 command_runner.go:130] > Compiler:         gc
	I0116 04:28:45.288280 2484801 command_runner.go:130] > Platform:         linux/arm64
	I0116 04:28:45.288316 2484801 command_runner.go:130] > Linkmode:         dynamic
	I0116 04:28:45.288340 2484801 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0116 04:28:45.288361 2484801 command_runner.go:130] > SeccompEnabled:   true
	I0116 04:28:45.288399 2484801 command_runner.go:130] > AppArmorEnabled:  false
	I0116 04:28:45.294424 2484801 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I0116 04:28:45.296417 2484801 out.go:177]   - env NO_PROXY=192.168.58.2
	I0116 04:28:45.298449 2484801 cli_runner.go:164] Run: docker network inspect multinode-701570 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0116 04:28:45.317752 2484801 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0116 04:28:45.322525 2484801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
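The hosts rewrite above is the usual idempotent pattern: strip any stale host.minikube.internal line, append the current gateway mapping, and cp the temp file over /etc/hosts in one step so readers never observe a half-written file:

	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  echo "192.168.58.1	host.minikube.internal"
	} > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts
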
	I0116 04:28:45.336218 2484801 certs.go:56] Setting up /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/multinode-701570 for IP: 192.168.58.3
	I0116 04:28:45.336251 2484801 certs.go:190] acquiring lock for shared ca certs: {Name:mkfc28b038850f5c4d343916ed6224daf2d0b70f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 04:28:45.336383 2484801 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17965-2415678/.minikube/ca.key
	I0116 04:28:45.336434 2484801 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17965-2415678/.minikube/proxy-client-ca.key
	I0116 04:28:45.336448 2484801 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-2415678/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0116 04:28:45.336463 2484801 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-2415678/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0116 04:28:45.336478 2484801 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-2415678/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0116 04:28:45.336489 2484801 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-2415678/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0116 04:28:45.336545 2484801 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/home/jenkins/minikube-integration/17965-2415678/.minikube/certs/2421005.pem (1338 bytes)
	W0116 04:28:45.336578 2484801 certs.go:433] ignoring /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/home/jenkins/minikube-integration/17965-2415678/.minikube/certs/2421005_empty.pem, impossibly tiny 0 bytes
	I0116 04:28:45.336593 2484801 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/home/jenkins/minikube-integration/17965-2415678/.minikube/certs/ca-key.pem (1675 bytes)
	I0116 04:28:45.336619 2484801 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/home/jenkins/minikube-integration/17965-2415678/.minikube/certs/ca.pem (1078 bytes)
	I0116 04:28:45.336647 2484801 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/home/jenkins/minikube-integration/17965-2415678/.minikube/certs/cert.pem (1123 bytes)
	I0116 04:28:45.336674 2484801 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/home/jenkins/minikube-integration/17965-2415678/.minikube/certs/key.pem (1679 bytes)
	I0116 04:28:45.336723 2484801 certs.go:437] found cert: /home/jenkins/minikube-integration/17965-2415678/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17965-2415678/.minikube/files/etc/ssl/certs/24210052.pem (1708 bytes)
	I0116 04:28:45.336778 2484801 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-2415678/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0116 04:28:45.336796 2484801 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/2421005.pem -> /usr/share/ca-certificates/2421005.pem
	I0116 04:28:45.336812 2484801 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17965-2415678/.minikube/files/etc/ssl/certs/24210052.pem -> /usr/share/ca-certificates/24210052.pem
	I0116 04:28:45.337163 2484801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-2415678/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 04:28:45.367160 2484801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-2415678/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0116 04:28:45.397378 2484801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-2415678/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 04:28:45.426619 2484801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-2415678/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0116 04:28:45.455531 2484801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-2415678/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 04:28:45.484877 2484801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-2415678/.minikube/certs/2421005.pem --> /usr/share/ca-certificates/2421005.pem (1338 bytes)
	I0116 04:28:45.515334 2484801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17965-2415678/.minikube/files/etc/ssl/certs/24210052.pem --> /usr/share/ca-certificates/24210052.pem (1708 bytes)
	I0116 04:28:45.545488 2484801 ssh_runner.go:195] Run: openssl version
	I0116 04:28:45.552152 2484801 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0116 04:28:45.552462 2484801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2421005.pem && ln -fs /usr/share/ca-certificates/2421005.pem /etc/ssl/certs/2421005.pem"
	I0116 04:28:45.564256 2484801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2421005.pem
	I0116 04:28:45.569225 2484801 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 16 04:13 /usr/share/ca-certificates/2421005.pem
	I0116 04:28:45.569248 2484801 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 04:13 /usr/share/ca-certificates/2421005.pem
	I0116 04:28:45.569299 2484801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2421005.pem
	I0116 04:28:45.577597 2484801 command_runner.go:130] > 51391683
	I0116 04:28:45.578060 2484801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2421005.pem /etc/ssl/certs/51391683.0"
	I0116 04:28:45.590229 2484801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/24210052.pem && ln -fs /usr/share/ca-certificates/24210052.pem /etc/ssl/certs/24210052.pem"
	I0116 04:28:45.602335 2484801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/24210052.pem
	I0116 04:28:45.606986 2484801 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 16 04:13 /usr/share/ca-certificates/24210052.pem
	I0116 04:28:45.607099 2484801 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 04:13 /usr/share/ca-certificates/24210052.pem
	I0116 04:28:45.607158 2484801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/24210052.pem
	I0116 04:28:45.615788 2484801 command_runner.go:130] > 3ec20f2e
	I0116 04:28:45.616188 2484801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/24210052.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 04:28:45.628543 2484801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 04:28:45.640742 2484801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 04:28:45.645575 2484801 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 16 04:06 /usr/share/ca-certificates/minikubeCA.pem
	I0116 04:28:45.645668 2484801 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 04:06 /usr/share/ca-certificates/minikubeCA.pem
	I0116 04:28:45.645726 2484801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 04:28:45.654346 2484801 command_runner.go:130] > b5213941
	I0116 04:28:45.654739 2484801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
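The hash/symlink sequence above follows OpenSSL's CA directory convention: openssl x509 -hash prints the certificate's subject-name hash, and a <hash>.0 symlink in /etc/ssl/certs lets any TLS client on the node find the CA without rebuilding a bundle. For the minikube CA:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # "b5213941" above
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
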
	I0116 04:28:45.666555 2484801 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 04:28:45.670776 2484801 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 04:28:45.671060 2484801 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 04:28:45.671189 2484801 ssh_runner.go:195] Run: crio config
	I0116 04:28:45.722299 2484801 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0116 04:28:45.722326 2484801 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0116 04:28:45.722335 2484801 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0116 04:28:45.722339 2484801 command_runner.go:130] > #
	I0116 04:28:45.722348 2484801 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0116 04:28:45.722356 2484801 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0116 04:28:45.722364 2484801 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0116 04:28:45.722376 2484801 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0116 04:28:45.722384 2484801 command_runner.go:130] > # reload'.
	I0116 04:28:45.722393 2484801 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0116 04:28:45.722404 2484801 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0116 04:28:45.722416 2484801 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0116 04:28:45.722428 2484801 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0116 04:28:45.722433 2484801 command_runner.go:130] > [crio]
	I0116 04:28:45.722447 2484801 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0116 04:28:45.722453 2484801 command_runner.go:130] > # containers images, in this directory.
	I0116 04:28:45.722465 2484801 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0116 04:28:45.722475 2484801 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0116 04:28:45.722635 2484801 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0116 04:28:45.722657 2484801 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0116 04:28:45.722669 2484801 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0116 04:28:45.722679 2484801 command_runner.go:130] > # storage_driver = "vfs"
	I0116 04:28:45.722686 2484801 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0116 04:28:45.722693 2484801 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0116 04:28:45.722699 2484801 command_runner.go:130] > # storage_option = [
	I0116 04:28:45.722961 2484801 command_runner.go:130] > # ]
	I0116 04:28:45.722980 2484801 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0116 04:28:45.722988 2484801 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0116 04:28:45.722997 2484801 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0116 04:28:45.723004 2484801 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0116 04:28:45.723020 2484801 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0116 04:28:45.723032 2484801 command_runner.go:130] > # always happen on a node reboot
	I0116 04:28:45.723040 2484801 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0116 04:28:45.723051 2484801 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0116 04:28:45.723058 2484801 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0116 04:28:45.723072 2484801 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0116 04:28:45.723079 2484801 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0116 04:28:45.723089 2484801 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0116 04:28:45.723101 2484801 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0116 04:28:45.723108 2484801 command_runner.go:130] > # internal_wipe = true
	I0116 04:28:45.723116 2484801 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0116 04:28:45.723127 2484801 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0116 04:28:45.723135 2484801 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0116 04:28:45.723144 2484801 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0116 04:28:45.723152 2484801 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0116 04:28:45.723157 2484801 command_runner.go:130] > [crio.api]
	I0116 04:28:45.723164 2484801 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0116 04:28:45.723173 2484801 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0116 04:28:45.723180 2484801 command_runner.go:130] > # IP address on which the stream server will listen.
	I0116 04:28:45.723186 2484801 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0116 04:28:45.723196 2484801 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0116 04:28:45.723206 2484801 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0116 04:28:45.723211 2484801 command_runner.go:130] > # stream_port = "0"
	I0116 04:28:45.723221 2484801 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0116 04:28:45.723227 2484801 command_runner.go:130] > # stream_enable_tls = false
	I0116 04:28:45.723235 2484801 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0116 04:28:45.723244 2484801 command_runner.go:130] > # stream_idle_timeout = ""
	I0116 04:28:45.723252 2484801 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0116 04:28:45.723260 2484801 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0116 04:28:45.723268 2484801 command_runner.go:130] > # minutes.
	I0116 04:28:45.723273 2484801 command_runner.go:130] > # stream_tls_cert = ""
	I0116 04:28:45.723280 2484801 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0116 04:28:45.723288 2484801 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0116 04:28:45.723296 2484801 command_runner.go:130] > # stream_tls_key = ""
	I0116 04:28:45.723305 2484801 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0116 04:28:45.723314 2484801 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0116 04:28:45.723324 2484801 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0116 04:28:45.723330 2484801 command_runner.go:130] > # stream_tls_ca = ""
	I0116 04:28:45.723339 2484801 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0116 04:28:45.723348 2484801 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0116 04:28:45.723357 2484801 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0116 04:28:45.723366 2484801 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0116 04:28:45.723378 2484801 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0116 04:28:45.723388 2484801 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0116 04:28:45.723393 2484801 command_runner.go:130] > [crio.runtime]
	I0116 04:28:45.723402 2484801 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0116 04:28:45.723413 2484801 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0116 04:28:45.723418 2484801 command_runner.go:130] > # "nofile=1024:2048"
	I0116 04:28:45.723431 2484801 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0116 04:28:45.723436 2484801 command_runner.go:130] > # default_ulimits = [
	I0116 04:28:45.723446 2484801 command_runner.go:130] > # ]
	I0116 04:28:45.723453 2484801 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0116 04:28:45.723458 2484801 command_runner.go:130] > # no_pivot = false
	I0116 04:28:45.723465 2484801 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0116 04:28:45.723475 2484801 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0116 04:28:45.723483 2484801 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0116 04:28:45.723493 2484801 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0116 04:28:45.723499 2484801 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0116 04:28:45.723511 2484801 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0116 04:28:45.723516 2484801 command_runner.go:130] > # conmon = ""
	I0116 04:28:45.723526 2484801 command_runner.go:130] > # Cgroup setting for conmon
	I0116 04:28:45.723534 2484801 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0116 04:28:45.723542 2484801 command_runner.go:130] > conmon_cgroup = "pod"
	I0116 04:28:45.723550 2484801 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0116 04:28:45.723557 2484801 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0116 04:28:45.723565 2484801 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0116 04:28:45.723572 2484801 command_runner.go:130] > # conmon_env = [
	I0116 04:28:45.723576 2484801 command_runner.go:130] > # ]
	I0116 04:28:45.723583 2484801 command_runner.go:130] > # Additional environment variables to set for all the
	I0116 04:28:45.723594 2484801 command_runner.go:130] > # containers. These are overridden if set in the
	I0116 04:28:45.723615 2484801 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0116 04:28:45.723623 2484801 command_runner.go:130] > # default_env = [
	I0116 04:28:45.723628 2484801 command_runner.go:130] > # ]
	I0116 04:28:45.723635 2484801 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0116 04:28:45.723817 2484801 command_runner.go:130] > # selinux = false
	I0116 04:28:45.723835 2484801 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0116 04:28:45.723844 2484801 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0116 04:28:45.723857 2484801 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0116 04:28:45.723862 2484801 command_runner.go:130] > # seccomp_profile = ""
	I0116 04:28:45.723870 2484801 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0116 04:28:45.723879 2484801 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0116 04:28:45.723887 2484801 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0116 04:28:45.723896 2484801 command_runner.go:130] > # which might increase security.
	I0116 04:28:45.723903 2484801 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0116 04:28:45.723916 2484801 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0116 04:28:45.723924 2484801 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0116 04:28:45.723932 2484801 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0116 04:28:45.723941 2484801 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0116 04:28:45.723948 2484801 command_runner.go:130] > # This option supports live configuration reload.
	I0116 04:28:45.723958 2484801 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0116 04:28:45.723965 2484801 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0116 04:28:45.723975 2484801 command_runner.go:130] > # the cgroup blockio controller.
	I0116 04:28:45.723980 2484801 command_runner.go:130] > # blockio_config_file = ""
	I0116 04:28:45.723989 2484801 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0116 04:28:45.724004 2484801 command_runner.go:130] > # irqbalance daemon.
	I0116 04:28:45.724011 2484801 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0116 04:28:45.724020 2484801 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0116 04:28:45.724028 2484801 command_runner.go:130] > # This option supports live configuration reload.
	I0116 04:28:45.724034 2484801 command_runner.go:130] > # rdt_config_file = ""
	I0116 04:28:45.724043 2484801 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0116 04:28:45.724052 2484801 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0116 04:28:45.724060 2484801 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0116 04:28:45.724069 2484801 command_runner.go:130] > # separate_pull_cgroup = ""
	I0116 04:28:45.724077 2484801 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0116 04:28:45.724085 2484801 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0116 04:28:45.724093 2484801 command_runner.go:130] > # will be added.
	I0116 04:28:45.724099 2484801 command_runner.go:130] > # default_capabilities = [
	I0116 04:28:45.724104 2484801 command_runner.go:130] > # 	"CHOWN",
	I0116 04:28:45.724109 2484801 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0116 04:28:45.724118 2484801 command_runner.go:130] > # 	"FSETID",
	I0116 04:28:45.724124 2484801 command_runner.go:130] > # 	"FOWNER",
	I0116 04:28:45.724132 2484801 command_runner.go:130] > # 	"SETGID",
	I0116 04:28:45.724314 2484801 command_runner.go:130] > # 	"SETUID",
	I0116 04:28:45.724328 2484801 command_runner.go:130] > # 	"SETPCAP",
	I0116 04:28:45.724345 2484801 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0116 04:28:45.724355 2484801 command_runner.go:130] > # 	"KILL",
	I0116 04:28:45.724359 2484801 command_runner.go:130] > # ]
	I0116 04:28:45.724369 2484801 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0116 04:28:45.724380 2484801 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0116 04:28:45.724392 2484801 command_runner.go:130] > # add_inheritable_capabilities = true
	I0116 04:28:45.724400 2484801 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0116 04:28:45.724411 2484801 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0116 04:28:45.724416 2484801 command_runner.go:130] > # default_sysctls = [
	I0116 04:28:45.724420 2484801 command_runner.go:130] > # ]
	I0116 04:28:45.724429 2484801 command_runner.go:130] > # List of devices on the host that a
	I0116 04:28:45.724437 2484801 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0116 04:28:45.724451 2484801 command_runner.go:130] > # allowed_devices = [
	I0116 04:28:45.724456 2484801 command_runner.go:130] > # 	"/dev/fuse",
	I0116 04:28:45.724465 2484801 command_runner.go:130] > # ]
	I0116 04:28:45.724471 2484801 command_runner.go:130] > # List of additional devices. specified as
	I0116 04:28:45.724502 2484801 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0116 04:28:45.724512 2484801 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0116 04:28:45.724521 2484801 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0116 04:28:45.724526 2484801 command_runner.go:130] > # additional_devices = [
	I0116 04:28:45.724532 2484801 command_runner.go:130] > # ]
	I0116 04:28:45.724539 2484801 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0116 04:28:45.724544 2484801 command_runner.go:130] > # cdi_spec_dirs = [
	I0116 04:28:45.724551 2484801 command_runner.go:130] > # 	"/etc/cdi",
	I0116 04:28:45.724561 2484801 command_runner.go:130] > # 	"/var/run/cdi",
	I0116 04:28:45.724565 2484801 command_runner.go:130] > # ]
	I0116 04:28:45.724573 2484801 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0116 04:28:45.724584 2484801 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0116 04:28:45.724589 2484801 command_runner.go:130] > # Defaults to false.
	I0116 04:28:45.724595 2484801 command_runner.go:130] > # device_ownership_from_security_context = false
	I0116 04:28:45.724610 2484801 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0116 04:28:45.724617 2484801 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0116 04:28:45.724622 2484801 command_runner.go:130] > # hooks_dir = [
	I0116 04:28:45.724628 2484801 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0116 04:28:45.724634 2484801 command_runner.go:130] > # ]
	I0116 04:28:45.724642 2484801 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0116 04:28:45.724653 2484801 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0116 04:28:45.724660 2484801 command_runner.go:130] > # its default mounts from the following two files:
	I0116 04:28:45.724667 2484801 command_runner.go:130] > #
	I0116 04:28:45.724677 2484801 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0116 04:28:45.724689 2484801 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0116 04:28:45.724696 2484801 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0116 04:28:45.724700 2484801 command_runner.go:130] > #
	I0116 04:28:45.724708 2484801 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0116 04:28:45.724718 2484801 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0116 04:28:45.724730 2484801 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0116 04:28:45.724740 2484801 command_runner.go:130] > #      only add mounts it finds in this file.
	I0116 04:28:45.724744 2484801 command_runner.go:130] > #
	I0116 04:28:45.724770 2484801 command_runner.go:130] > # default_mounts_file = ""
	I0116 04:28:45.724778 2484801 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0116 04:28:45.724789 2484801 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0116 04:28:45.724794 2484801 command_runner.go:130] > # pids_limit = 0
	I0116 04:28:45.724804 2484801 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0116 04:28:45.724817 2484801 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0116 04:28:45.724827 2484801 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0116 04:28:45.724840 2484801 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0116 04:28:45.724845 2484801 command_runner.go:130] > # log_size_max = -1
	I0116 04:28:45.724858 2484801 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0116 04:28:45.724864 2484801 command_runner.go:130] > # log_to_journald = false
	I0116 04:28:45.724874 2484801 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0116 04:28:45.724882 2484801 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0116 04:28:45.724889 2484801 command_runner.go:130] > # Path to directory for container attach sockets.
	I0116 04:28:45.725083 2484801 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0116 04:28:45.725099 2484801 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0116 04:28:45.725106 2484801 command_runner.go:130] > # bind_mount_prefix = ""
	I0116 04:28:45.725114 2484801 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0116 04:28:45.725121 2484801 command_runner.go:130] > # read_only = false
	I0116 04:28:45.725130 2484801 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0116 04:28:45.725141 2484801 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0116 04:28:45.725149 2484801 command_runner.go:130] > # live configuration reload.
	I0116 04:28:45.725158 2484801 command_runner.go:130] > # log_level = "info"
	I0116 04:28:45.725166 2484801 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0116 04:28:45.725180 2484801 command_runner.go:130] > # This option supports live configuration reload.
	I0116 04:28:45.725185 2484801 command_runner.go:130] > # log_filter = ""
	I0116 04:28:45.725193 2484801 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0116 04:28:45.725201 2484801 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0116 04:28:45.725213 2484801 command_runner.go:130] > # separated by comma.
	I0116 04:28:45.725218 2484801 command_runner.go:130] > # uid_mappings = ""
	I0116 04:28:45.725227 2484801 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0116 04:28:45.725238 2484801 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0116 04:28:45.725244 2484801 command_runner.go:130] > # separated by comma.
	I0116 04:28:45.725255 2484801 command_runner.go:130] > # gid_mappings = ""
	I0116 04:28:45.725266 2484801 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0116 04:28:45.725273 2484801 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0116 04:28:45.725285 2484801 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0116 04:28:45.725295 2484801 command_runner.go:130] > # minimum_mappable_uid = -1
	I0116 04:28:45.725303 2484801 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0116 04:28:45.725312 2484801 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0116 04:28:45.725324 2484801 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0116 04:28:45.725330 2484801 command_runner.go:130] > # minimum_mappable_gid = -1
	I0116 04:28:45.725338 2484801 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0116 04:28:45.725349 2484801 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0116 04:28:45.725357 2484801 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0116 04:28:45.725362 2484801 command_runner.go:130] > # ctr_stop_timeout = 30
	I0116 04:28:45.725370 2484801 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0116 04:28:45.725377 2484801 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0116 04:28:45.725388 2484801 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0116 04:28:45.725397 2484801 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0116 04:28:45.725405 2484801 command_runner.go:130] > # drop_infra_ctr = true
	I0116 04:28:45.725413 2484801 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0116 04:28:45.725424 2484801 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0116 04:28:45.725433 2484801 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0116 04:28:45.725440 2484801 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0116 04:28:45.725448 2484801 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0116 04:28:45.725458 2484801 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0116 04:28:45.725464 2484801 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0116 04:28:45.725477 2484801 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0116 04:28:45.725662 2484801 command_runner.go:130] > # pinns_path = ""
	I0116 04:28:45.725680 2484801 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0116 04:28:45.725688 2484801 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0116 04:28:45.725696 2484801 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0116 04:28:45.725702 2484801 command_runner.go:130] > # default_runtime = "runc"
	I0116 04:28:45.725708 2484801 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0116 04:28:45.725722 2484801 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0116 04:28:45.725735 2484801 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0116 04:28:45.725745 2484801 command_runner.go:130] > # creation as a file is not desired either.
	I0116 04:28:45.725755 2484801 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0116 04:28:45.725765 2484801 command_runner.go:130] > # the hostname is being managed dynamically.
	I0116 04:28:45.725772 2484801 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0116 04:28:45.725776 2484801 command_runner.go:130] > # ]
	I0116 04:28:45.725784 2484801 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0116 04:28:45.725794 2484801 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0116 04:28:45.725805 2484801 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0116 04:28:45.725813 2484801 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0116 04:28:45.725821 2484801 command_runner.go:130] > #
	I0116 04:28:45.725832 2484801 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0116 04:28:45.725841 2484801 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0116 04:28:45.725846 2484801 command_runner.go:130] > #  runtime_type = "oci"
	I0116 04:28:45.725852 2484801 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0116 04:28:45.725858 2484801 command_runner.go:130] > #  privileged_without_host_devices = false
	I0116 04:28:45.725864 2484801 command_runner.go:130] > #  allowed_annotations = []
	I0116 04:28:45.725873 2484801 command_runner.go:130] > # Where:
	I0116 04:28:45.725880 2484801 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0116 04:28:45.725893 2484801 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0116 04:28:45.725901 2484801 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0116 04:28:45.725913 2484801 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0116 04:28:45.725918 2484801 command_runner.go:130] > #   in $PATH.
	I0116 04:28:45.725930 2484801 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0116 04:28:45.725936 2484801 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0116 04:28:45.725944 2484801 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0116 04:28:45.725948 2484801 command_runner.go:130] > #   state.
	I0116 04:28:45.725956 2484801 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0116 04:28:45.725967 2484801 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0116 04:28:45.725983 2484801 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0116 04:28:45.725995 2484801 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0116 04:28:45.726002 2484801 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0116 04:28:45.726014 2484801 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0116 04:28:45.726020 2484801 command_runner.go:130] > #   The currently recognized values are:
	I0116 04:28:45.726028 2484801 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0116 04:28:45.726037 2484801 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0116 04:28:45.726044 2484801 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0116 04:28:45.726055 2484801 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0116 04:28:45.726065 2484801 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0116 04:28:45.726076 2484801 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0116 04:28:45.726085 2484801 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0116 04:28:45.726096 2484801 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0116 04:28:45.726103 2484801 command_runner.go:130] > #   should be moved to the container's cgroup
	I0116 04:28:45.726108 2484801 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0116 04:28:45.726115 2484801 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0116 04:28:45.726120 2484801 command_runner.go:130] > runtime_type = "oci"
	I0116 04:28:45.726126 2484801 command_runner.go:130] > runtime_root = "/run/runc"
	I0116 04:28:45.726136 2484801 command_runner.go:130] > runtime_config_path = ""
	I0116 04:28:45.726142 2484801 command_runner.go:130] > monitor_path = ""
	I0116 04:28:45.726151 2484801 command_runner.go:130] > monitor_cgroup = ""
	I0116 04:28:45.726157 2484801 command_runner.go:130] > monitor_exec_cgroup = ""
	I0116 04:28:45.726186 2484801 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0116 04:28:45.726195 2484801 command_runner.go:130] > # running containers
	I0116 04:28:45.726201 2484801 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0116 04:28:45.726209 2484801 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0116 04:28:45.726227 2484801 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0116 04:28:45.726238 2484801 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0116 04:28:45.726245 2484801 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0116 04:28:45.726256 2484801 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0116 04:28:45.726262 2484801 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0116 04:28:45.726268 2484801 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0116 04:28:45.726274 2484801 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0116 04:28:45.726280 2484801 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0116 04:28:45.726292 2484801 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0116 04:28:45.726299 2484801 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0116 04:28:45.726314 2484801 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0116 04:28:45.726330 2484801 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0116 04:28:45.726340 2484801 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0116 04:28:45.726353 2484801 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0116 04:28:45.726364 2484801 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0116 04:28:45.726374 2484801 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0116 04:28:45.726385 2484801 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0116 04:28:45.726395 2484801 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0116 04:28:45.726403 2484801 command_runner.go:130] > # Example:
	I0116 04:28:45.726409 2484801 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0116 04:28:45.726415 2484801 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0116 04:28:45.726425 2484801 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0116 04:28:45.726432 2484801 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0116 04:28:45.726436 2484801 command_runner.go:130] > # cpuset = "0-1"
	I0116 04:28:45.726441 2484801 command_runner.go:130] > # cpushares = 0
	I0116 04:28:45.726446 2484801 command_runner.go:130] > # Where:
	I0116 04:28:45.726451 2484801 command_runner.go:130] > # The workload name is workload-type.
	I0116 04:28:45.726467 2484801 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0116 04:28:45.726477 2484801 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0116 04:28:45.726488 2484801 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0116 04:28:45.726498 2484801 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0116 04:28:45.726508 2484801 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0116 04:28:45.726513 2484801 command_runner.go:130] > # 
	I0116 04:28:45.726521 2484801 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0116 04:28:45.726525 2484801 command_runner.go:130] > #
	I0116 04:28:45.726533 2484801 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0116 04:28:45.726541 2484801 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0116 04:28:45.726552 2484801 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0116 04:28:45.726560 2484801 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0116 04:28:45.726571 2484801 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0116 04:28:45.726575 2484801 command_runner.go:130] > [crio.image]
	I0116 04:28:45.726588 2484801 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0116 04:28:45.726594 2484801 command_runner.go:130] > # default_transport = "docker://"
	I0116 04:28:45.726601 2484801 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0116 04:28:45.726609 2484801 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0116 04:28:45.726618 2484801 command_runner.go:130] > # global_auth_file = ""
	I0116 04:28:45.726626 2484801 command_runner.go:130] > # The image used to instantiate infra containers.
	I0116 04:28:45.726636 2484801 command_runner.go:130] > # This option supports live configuration reload.
	I0116 04:28:45.726643 2484801 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0116 04:28:45.726652 2484801 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0116 04:28:45.726662 2484801 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0116 04:28:45.726669 2484801 command_runner.go:130] > # This option supports live configuration reload.
	I0116 04:28:45.726674 2484801 command_runner.go:130] > # pause_image_auth_file = ""
	I0116 04:28:45.726681 2484801 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0116 04:28:45.726689 2484801 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0116 04:28:45.726700 2484801 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0116 04:28:45.726708 2484801 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0116 04:28:45.726717 2484801 command_runner.go:130] > # pause_command = "/pause"
	I0116 04:28:45.726724 2484801 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0116 04:28:45.726733 2484801 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0116 04:28:45.726744 2484801 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0116 04:28:45.726752 2484801 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0116 04:28:45.726758 2484801 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0116 04:28:45.726770 2484801 command_runner.go:130] > # signature_policy = ""
	I0116 04:28:45.726779 2484801 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0116 04:28:45.726790 2484801 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0116 04:28:45.726795 2484801 command_runner.go:130] > # changing them here.
	I0116 04:28:45.726800 2484801 command_runner.go:130] > # insecure_registries = [
	I0116 04:28:45.726805 2484801 command_runner.go:130] > # ]
	I0116 04:28:45.726818 2484801 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0116 04:28:45.726825 2484801 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I0116 04:28:45.727184 2484801 command_runner.go:130] > # image_volumes = "mkdir"
	I0116 04:28:45.727203 2484801 command_runner.go:130] > # Temporary directory to use for storing big files
	I0116 04:28:45.727209 2484801 command_runner.go:130] > # big_files_temporary_dir = ""
	I0116 04:28:45.727217 2484801 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0116 04:28:45.727222 2484801 command_runner.go:130] > # CNI plugins.
	I0116 04:28:45.727226 2484801 command_runner.go:130] > [crio.network]
	I0116 04:28:45.727234 2484801 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0116 04:28:45.727247 2484801 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0116 04:28:45.727252 2484801 command_runner.go:130] > # cni_default_network = ""
	I0116 04:28:45.727260 2484801 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0116 04:28:45.727268 2484801 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0116 04:28:45.727275 2484801 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0116 04:28:45.727283 2484801 command_runner.go:130] > # plugin_dirs = [
	I0116 04:28:45.727288 2484801 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0116 04:28:45.727292 2484801 command_runner.go:130] > # ]
	I0116 04:28:45.727300 2484801 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0116 04:28:45.727305 2484801 command_runner.go:130] > [crio.metrics]
	I0116 04:28:45.727314 2484801 command_runner.go:130] > # Globally enable or disable metrics support.
	I0116 04:28:45.727319 2484801 command_runner.go:130] > # enable_metrics = false
	I0116 04:28:45.727328 2484801 command_runner.go:130] > # Specify enabled metrics collectors.
	I0116 04:28:45.727333 2484801 command_runner.go:130] > # By default, all metrics are enabled.
	I0116 04:28:45.727342 2484801 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0116 04:28:45.727353 2484801 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0116 04:28:45.727360 2484801 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0116 04:28:45.727369 2484801 command_runner.go:130] > # metrics_collectors = [
	I0116 04:28:45.727374 2484801 command_runner.go:130] > # 	"operations",
	I0116 04:28:45.727380 2484801 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0116 04:28:45.727386 2484801 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0116 04:28:45.727391 2484801 command_runner.go:130] > # 	"operations_errors",
	I0116 04:28:45.727396 2484801 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0116 04:28:45.727518 2484801 command_runner.go:130] > # 	"image_pulls_by_name",
	I0116 04:28:45.727532 2484801 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0116 04:28:45.727552 2484801 command_runner.go:130] > # 	"image_pulls_failures",
	I0116 04:28:45.727565 2484801 command_runner.go:130] > # 	"image_pulls_successes",
	I0116 04:28:45.727572 2484801 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0116 04:28:45.727577 2484801 command_runner.go:130] > # 	"image_layer_reuse",
	I0116 04:28:45.727585 2484801 command_runner.go:130] > # 	"containers_oom_total",
	I0116 04:28:45.727590 2484801 command_runner.go:130] > # 	"containers_oom",
	I0116 04:28:45.727595 2484801 command_runner.go:130] > # 	"processes_defunct",
	I0116 04:28:45.727601 2484801 command_runner.go:130] > # 	"operations_total",
	I0116 04:28:45.727606 2484801 command_runner.go:130] > # 	"operations_latency_seconds",
	I0116 04:28:45.727623 2484801 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0116 04:28:45.727633 2484801 command_runner.go:130] > # 	"operations_errors_total",
	I0116 04:28:45.727639 2484801 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0116 04:28:45.727645 2484801 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0116 04:28:45.727654 2484801 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0116 04:28:45.727660 2484801 command_runner.go:130] > # 	"image_pulls_success_total",
	I0116 04:28:45.727665 2484801 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0116 04:28:45.727671 2484801 command_runner.go:130] > # 	"containers_oom_count_total",
	I0116 04:28:45.727675 2484801 command_runner.go:130] > # ]
	I0116 04:28:45.727685 2484801 command_runner.go:130] > # The port on which the metrics server will listen.
	I0116 04:28:45.727690 2484801 command_runner.go:130] > # metrics_port = 9090
	I0116 04:28:45.727700 2484801 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0116 04:28:45.727705 2484801 command_runner.go:130] > # metrics_socket = ""
	I0116 04:28:45.727711 2484801 command_runner.go:130] > # The certificate for the secure metrics server.
	I0116 04:28:45.727720 2484801 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0116 04:28:45.727731 2484801 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0116 04:28:45.727736 2484801 command_runner.go:130] > # certificate on any modification event.
	I0116 04:28:45.727744 2484801 command_runner.go:130] > # metrics_cert = ""
	I0116 04:28:45.727755 2484801 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0116 04:28:45.727761 2484801 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0116 04:28:45.727769 2484801 command_runner.go:130] > # metrics_key = ""
	I0116 04:28:45.727777 2484801 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0116 04:28:45.727782 2484801 command_runner.go:130] > [crio.tracing]
	I0116 04:28:45.727792 2484801 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0116 04:28:45.727797 2484801 command_runner.go:130] > # enable_tracing = false
	I0116 04:28:45.727805 2484801 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0116 04:28:45.727813 2484801 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0116 04:28:45.727820 2484801 command_runner.go:130] > # Number of samples to collect per million spans.
	I0116 04:28:45.727826 2484801 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0116 04:28:45.727833 2484801 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0116 04:28:45.727838 2484801 command_runner.go:130] > [crio.stats]
	I0116 04:28:45.727848 2484801 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0116 04:28:45.727856 2484801 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0116 04:28:45.727864 2484801 command_runner.go:130] > # stats_collection_period = 0
	I0116 04:28:45.730131 2484801 command_runner.go:130] ! time="2024-01-16 04:28:45.719694308Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0116 04:28:45.730212 2484801 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0116 04:28:45.730313 2484801 cni.go:84] Creating CNI manager for ""
	I0116 04:28:45.730333 2484801 cni.go:136] 2 nodes found, recommending kindnet
	I0116 04:28:45.730343 2484801 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 04:28:45.730374 2484801 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-701570 NodeName:multinode-701570-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 04:28:45.730517 2484801 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-701570-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
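The kubeadm manifest above appears to be rendered from the "kubeadm options" struct logged at 04:28:45.730374, presumably via Go's text/template. As a minimal sketch of that rendering step (hypothetical struct and template names, not minikube's actual code; only a few fields shown):

package main

import (
	"os"
	"text/template"
)

// kubeadmOpts mirrors a few fields from the "kubeadm options" log line above;
// the struct and template here are illustrative stand-ins.
type kubeadmOpts struct {
	AdvertiseAddress string
	APIServerPort    int
	NodeName         string
	PodSubnet        string
}

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  name: "{{.NodeName}}"
`

func main() {
	opts := kubeadmOpts{
		AdvertiseAddress: "192.168.58.3",
		APIServerPort:    8443,
		NodeName:         "multinode-701570-m02",
		PodSubnet:        "10.244.0.0/16",
	}
	// Render the manifest to stdout for illustration.
	t := template.Must(template.New("init").Parse(initTmpl))
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}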
	I0116 04:28:45.730575 2484801 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-701570-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-701570 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 04:28:45.730641 2484801 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 04:28:45.740372 2484801 command_runner.go:130] > kubeadm
	I0116 04:28:45.740393 2484801 command_runner.go:130] > kubectl
	I0116 04:28:45.740399 2484801 command_runner.go:130] > kubelet
	I0116 04:28:45.741569 2484801 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 04:28:45.741695 2484801 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0116 04:28:45.752425 2484801 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0116 04:28:45.774573 2484801 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 04:28:45.796421 2484801 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0116 04:28:45.800798 2484801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
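The bash one-liner above makes the hosts entry idempotent: it filters out any existing control-plane.minikube.internal line, appends the current mapping, and copies the result back over /etc/hosts via a temp file. A rough Go equivalent of the same filter-and-append step (a sketch, not minikube's implementation; error handling kept minimal):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any line that already maps the host (matching the
// grep -v $'\t<host>$' in the command above) and appends a fresh "<ip>\t<host>" line.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.58.2", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}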
	I0116 04:28:45.814081 2484801 host.go:66] Checking if "multinode-701570" exists ...
	I0116 04:28:45.814344 2484801 start.go:304] JoinCluster: &{Name:multinode-701570 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-701570 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 04:28:45.814432 2484801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0116 04:28:45.814482 2484801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701570
	I0116 04:28:45.814834 2484801 config.go:182] Loaded profile config "multinode-701570": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 04:28:45.833472 2484801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35391 SSHKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/machines/multinode-701570/id_rsa Username:docker}
	I0116 04:28:46.007477 2484801 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token jvjjsh.irundv71y75byptn --discovery-token-ca-cert-hash sha256:c8e67ac96916dfae1995365a18c7132d078acd6bda510edb19f010658e1bfbad 
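The --discovery-token-ca-cert-hash in the printed join command uses the standard kubeadm format: a SHA-256 digest of the cluster CA certificate's Subject Public Key Info. A short sketch of computing that value from ca.crt (the path is borrowed from the certs dir seen earlier in this log; purely illustrative):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Load the cluster CA certificate.
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded Subject Public Key Info of the CA key.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}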
	I0116 04:28:46.007669 2484801 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0116 04:28:46.007741 2484801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token jvjjsh.irundv71y75byptn --discovery-token-ca-cert-hash sha256:c8e67ac96916dfae1995365a18c7132d078acd6bda510edb19f010658e1bfbad --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-701570-m02"
	I0116 04:28:46.054410 2484801 command_runner.go:130] > [preflight] Running pre-flight checks
	I0116 04:28:46.098057 2484801 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0116 04:28:46.098079 2484801 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1051-aws
	I0116 04:28:46.098086 2484801 command_runner.go:130] > OS: Linux
	I0116 04:28:46.098093 2484801 command_runner.go:130] > CGROUPS_CPU: enabled
	I0116 04:28:46.098100 2484801 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0116 04:28:46.098108 2484801 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0116 04:28:46.098114 2484801 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0116 04:28:46.098120 2484801 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0116 04:28:46.098126 2484801 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0116 04:28:46.098133 2484801 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0116 04:28:46.098140 2484801 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0116 04:28:46.098146 2484801 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0116 04:28:46.211197 2484801 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0116 04:28:46.211228 2484801 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0116 04:28:46.239849 2484801 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 04:28:46.240050 2484801 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 04:28:46.240061 2484801 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0116 04:28:46.340095 2484801 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0116 04:28:49.359105 2484801 command_runner.go:130] > This node has joined the cluster:
	I0116 04:28:49.359132 2484801 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0116 04:28:49.359141 2484801 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0116 04:28:49.359149 2484801 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0116 04:28:49.362379 2484801 command_runner.go:130] ! W0116 04:28:46.054029    1023 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0116 04:28:49.362420 2484801 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I0116 04:28:49.362435 2484801 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 04:28:49.362452 2484801 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token jvjjsh.irundv71y75byptn --discovery-token-ca-cert-hash sha256:c8e67ac96916dfae1995365a18c7132d078acd6bda510edb19f010658e1bfbad --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-701570-m02": (3.354684032s)
	I0116 04:28:49.362472 2484801 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0116 04:28:49.602437 2484801 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0116 04:28:49.602539 2484801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=6e8fa5f64d0e7272be43ff25ed3826261f0a2578 minikube.k8s.io/name=multinode-701570 minikube.k8s.io/updated_at=2024_01_16T04_28_49_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 04:28:49.713293 2484801 command_runner.go:130] > node/multinode-701570-m02 labeled
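The labeling step above shells out to the cached kubectl; the equivalent client-go call is a strategic-merge patch against the Node object, which is roughly what kubectl label issues under the hood. A sketch, with clientset construction omitted and label values copied from the command above:

package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// labelNode applies minikube's marker labels to a node via a strategic-merge patch.
func labelNode(ctx context.Context, c kubernetes.Interface, node string) error {
	patch := []byte(`{"metadata":{"labels":{"minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false"}}}`)
	_, err := c.CoreV1().Nodes().Patch(ctx, node, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	return err
}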
	I0116 04:28:49.716897 2484801 start.go:306] JoinCluster complete in 3.902547279s
	I0116 04:28:49.716927 2484801 cni.go:84] Creating CNI manager for ""
	I0116 04:28:49.716934 2484801 cni.go:136] 2 nodes found, recommending kindnet
	I0116 04:28:49.716988 2484801 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0116 04:28:49.722098 2484801 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0116 04:28:49.722128 2484801 command_runner.go:130] >   Size: 4030506   	Blocks: 7880       IO Block: 4096   regular file
	I0116 04:28:49.722137 2484801 command_runner.go:130] > Device: 3ah/58d	Inode: 1827011     Links: 1
	I0116 04:28:49.722156 2484801 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0116 04:28:49.722163 2484801 command_runner.go:130] > Access: 2023-12-04 16:39:54.000000000 +0000
	I0116 04:28:49.722175 2484801 command_runner.go:130] > Modify: 2023-12-04 16:39:54.000000000 +0000
	I0116 04:28:49.722181 2484801 command_runner.go:130] > Change: 2024-01-16 04:06:01.868348216 +0000
	I0116 04:28:49.722191 2484801 command_runner.go:130] >  Birth: 2024-01-16 04:06:01.824349333 +0000
	I0116 04:28:49.722283 2484801 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0116 04:28:49.722297 2484801 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0116 04:28:49.743802 2484801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0116 04:28:50.199416 2484801 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0116 04:28:50.205305 2484801 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0116 04:28:50.210708 2484801 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0116 04:28:50.227757 2484801 command_runner.go:130] > daemonset.apps/kindnet configured
	I0116 04:28:50.234846 2484801 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17965-2415678/kubeconfig
	I0116 04:28:50.235149 2484801 kapi.go:59] client config for multinode-701570: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/multinode-701570/client.crt", KeyFile:"/home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/multinode-701570/client.key", CAFile:"/home/jenkins/minikube-integration/17965-2415678/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9c50), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 04:28:50.235540 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0116 04:28:50.235550 2484801 round_trippers.go:469] Request Headers:
	I0116 04:28:50.235564 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:28:50.235571 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:28:50.239896 2484801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 04:28:50.239920 2484801 round_trippers.go:577] Response Headers:
	I0116 04:28:50.239929 2484801 round_trippers.go:580]     Audit-Id: 3036f52f-a3e7-4b3d-8dd4-596bc5639549
	I0116 04:28:50.239939 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:28:50.239946 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:28:50.239952 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:28:50.239963 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:28:50.239969 2484801 round_trippers.go:580]     Content-Length: 291
	I0116 04:28:50.239975 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:28:50 GMT
	I0116 04:28:50.239998 2484801 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0895626e-095c-45ca-93ec-399da9451bea","resourceVersion":"434","creationTimestamp":"2024-01-16T04:28:18Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0116 04:28:50.240177 2484801 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-701570" context rescaled to 1 replicas
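The rescale goes through the deployment's scale subresource: the GET shown above, followed by a write of the desired replica count. With client-go the pair of calls looks roughly like this (a sketch; clientset construction omitted):

package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// rescaleCoreDNS reads the coredns scale subresource and writes back the
// desired replica count, mirroring the GET .../deployments/coredns/scale above.
func rescaleCoreDNS(ctx context.Context, c kubernetes.Interface, replicas int32) error {
	scale, err := c.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = replicas
	_, err = c.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
	return err
}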
	I0116 04:28:50.240207 2484801 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0116 04:28:50.242077 2484801 out.go:177] * Verifying Kubernetes components...
	I0116 04:28:50.243608 2484801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 04:28:50.259708 2484801 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17965-2415678/kubeconfig
	I0116 04:28:50.259969 2484801 kapi.go:59] client config for multinode-701570: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/multinode-701570/client.crt", KeyFile:"/home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/multinode-701570/client.key", CAFile:"/home/jenkins/minikube-integration/17965-2415678/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9c50), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 04:28:50.260249 2484801 node_ready.go:35] waiting up to 6m0s for node "multinode-701570-m02" to be "Ready" ...
	I0116 04:28:50.260326 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:28:50.260337 2484801 round_trippers.go:469] Request Headers:
	I0116 04:28:50.260346 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:28:50.260354 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:28:50.262970 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:28:50.262991 2484801 round_trippers.go:577] Response Headers:
	I0116 04:28:50.262999 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:28:50.263006 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:28:50 GMT
	I0116 04:28:50.263012 2484801 round_trippers.go:580]     Audit-Id: 19c955df-69ae-47ce-9756-cff39b624980
	I0116 04:28:50.263019 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:28:50.263025 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:28:50.263031 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:28:50.263169 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"483","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0116 04:28:50.760557 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:28:50.760598 2484801 round_trippers.go:469] Request Headers:
	I0116 04:28:50.760608 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:28:50.760621 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:28:50.763298 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:28:50.763318 2484801 round_trippers.go:577] Response Headers:
	I0116 04:28:50.763327 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:28:50.763334 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:28:50 GMT
	I0116 04:28:50.763341 2484801 round_trippers.go:580]     Audit-Id: cd232988-a2c1-404e-a0ab-4bc74b826851
	I0116 04:28:50.763347 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:28:50.763353 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:28:50.763360 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:28:50.763465 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"483","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0116 04:28:51.261089 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:28:51.261118 2484801 round_trippers.go:469] Request Headers:
	I0116 04:28:51.261128 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:28:51.261135 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:28:51.263621 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:28:51.263645 2484801 round_trippers.go:577] Response Headers:
	I0116 04:28:51.263653 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:28:51.263660 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:28:51.263666 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:28:51.263672 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:28:51 GMT
	I0116 04:28:51.263678 2484801 round_trippers.go:580]     Audit-Id: 41e3975d-9810-4076-851c-075ae49a4d11
	I0116 04:28:51.263691 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:28:51.263825 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"483","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0116 04:28:51.760966 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:28:51.760993 2484801 round_trippers.go:469] Request Headers:
	I0116 04:28:51.761002 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:28:51.761010 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:28:51.763834 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:28:51.763862 2484801 round_trippers.go:577] Response Headers:
	I0116 04:28:51.763871 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:28:51.763878 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:28:51 GMT
	I0116 04:28:51.763885 2484801 round_trippers.go:580]     Audit-Id: 78044751-edeb-4828-bce4-82f1c68062d1
	I0116 04:28:51.763891 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:28:51.763897 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:28:51.763908 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:28:51.764061 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"483","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0116 04:28:52.261242 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:28:52.261266 2484801 round_trippers.go:469] Request Headers:
	I0116 04:28:52.261276 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:28:52.261283 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:28:52.263809 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:28:52.263828 2484801 round_trippers.go:577] Response Headers:
	I0116 04:28:52.263837 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:28:52.263844 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:28:52.263850 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:28:52 GMT
	I0116 04:28:52.263856 2484801 round_trippers.go:580]     Audit-Id: 0df52c05-d3d7-417d-9a98-d51ee0eb6c3b
	I0116 04:28:52.263862 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:28:52.263868 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:28:52.264005 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"483","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0116 04:28:52.264388 2484801 node_ready.go:58] node "multinode-701570-m02" has status "Ready":"False"
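Each cycle above re-fetches the Node and inspects status.conditions; "Ready":"False" means the NodeReady condition has not yet flipped to True (kindnet is still bringing up the CNI on the new node at this point). A condensed sketch of such a wait loop with client-go (not minikube's node_ready implementation; the 500ms interval matches the polling cadence visible in the timestamps):

package example

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the Node object until its NodeReady condition is True.
func waitNodeReady(ctx context.Context, c kubernetes.Interface, name string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}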
	I0116 04:28:52.761161 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:28:52.761190 2484801 round_trippers.go:469] Request Headers:
	I0116 04:28:52.761201 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:28:52.761210 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:28:52.764006 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:28:52.764035 2484801 round_trippers.go:577] Response Headers:
	I0116 04:28:52.764047 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:28:52.764056 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:28:52.764063 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:28:52.764070 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:28:52.764078 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:28:52 GMT
	I0116 04:28:52.764085 2484801 round_trippers.go:580]     Audit-Id: 661efe5b-4702-4520-9f8a-cc0581854847
	I0116 04:28:52.764348 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"483","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0116 04:28:53.261361 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:28:53.261389 2484801 round_trippers.go:469] Request Headers:
	I0116 04:28:53.261399 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:28:53.261407 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:28:53.264079 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:28:53.264168 2484801 round_trippers.go:577] Response Headers:
	I0116 04:28:53.264192 2484801 round_trippers.go:580]     Audit-Id: 4619611a-1401-44b0-9292-467d73f5b59f
	I0116 04:28:53.264215 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:28:53.264223 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:28:53.264230 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:28:53.264236 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:28:53.264269 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:28:53 GMT
	I0116 04:28:53.264407 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"483","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0116 04:28:53.760447 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:28:53.760472 2484801 round_trippers.go:469] Request Headers:
	I0116 04:28:53.760482 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:28:53.760490 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:28:53.763289 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:28:53.763369 2484801 round_trippers.go:577] Response Headers:
	I0116 04:28:53.763402 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:28:53.763439 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:28:53 GMT
	I0116 04:28:53.763473 2484801 round_trippers.go:580]     Audit-Id: 89ab344d-3e28-4d72-a2b9-d925512858f8
	I0116 04:28:53.763500 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:28:53.763522 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:28:53.763530 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:28:53.763681 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"483","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0116 04:28:54.260629 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:28:54.260655 2484801 round_trippers.go:469] Request Headers:
	I0116 04:28:54.260665 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:28:54.260672 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:28:54.263129 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:28:54.263148 2484801 round_trippers.go:577] Response Headers:
	I0116 04:28:54.263157 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:28:54 GMT
	I0116 04:28:54.263163 2484801 round_trippers.go:580]     Audit-Id: c8cc8f93-783e-45b2-a9a4-c7eaedf56538
	I0116 04:28:54.263172 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:28:54.263178 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:28:54.263184 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:28:54.263190 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:28:54.263337 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"483","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0116 04:28:54.761453 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:28:54.761497 2484801 round_trippers.go:469] Request Headers:
	I0116 04:28:54.761507 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:28:54.761520 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:28:54.764004 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:28:54.764032 2484801 round_trippers.go:577] Response Headers:
	I0116 04:28:54.764040 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:28:54 GMT
	I0116 04:28:54.764047 2484801 round_trippers.go:580]     Audit-Id: 66b57391-a682-4c88-92df-a61e79408f25
	I0116 04:28:54.764054 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:28:54.764061 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:28:54.764067 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:28:54.764073 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:28:54.764221 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"483","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0116 04:28:54.764668 2484801 node_ready.go:58] node "multinode-701570-m02" has status "Ready":"False"
	I0116 04:28:55.261268 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:28:55.261293 2484801 round_trippers.go:469] Request Headers:
	I0116 04:28:55.261302 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:28:55.261317 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:28:55.263756 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:28:55.263779 2484801 round_trippers.go:577] Response Headers:
	I0116 04:28:55.263788 2484801 round_trippers.go:580]     Audit-Id: e2fabcaa-cbc7-4ce6-ba00-75863ffe1ec1
	I0116 04:28:55.263796 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:28:55.263802 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:28:55.263808 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:28:55.263817 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:28:55.263832 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:28:55 GMT
	I0116 04:28:55.264211 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"483","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0116 04:28:55.760843 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:28:55.760871 2484801 round_trippers.go:469] Request Headers:
	I0116 04:28:55.760883 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:28:55.760891 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:28:55.763259 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:28:55.763283 2484801 round_trippers.go:577] Response Headers:
	I0116 04:28:55.763292 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:28:55 GMT
	I0116 04:28:55.763298 2484801 round_trippers.go:580]     Audit-Id: 4e03da7c-d30a-44a4-b05d-2e89046437a6
	I0116 04:28:55.763320 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:28:55.763333 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:28:55.763340 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:28:55.763350 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:28:55.763533 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"483","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0116 04:28:56.261273 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:28:56.261301 2484801 round_trippers.go:469] Request Headers:
	I0116 04:28:56.261312 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:28:56.261319 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:28:56.263749 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:28:56.263768 2484801 round_trippers.go:577] Response Headers:
	I0116 04:28:56.263777 2484801 round_trippers.go:580]     Audit-Id: 6a929c27-7c51-488c-9f45-c9ca3a919c1c
	I0116 04:28:56.263783 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:28:56.263789 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:28:56.263795 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:28:56.263801 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:28:56.263807 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:28:56 GMT
	I0116 04:28:56.263944 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"483","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0116 04:28:56.760683 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:28:56.760706 2484801 round_trippers.go:469] Request Headers:
	I0116 04:28:56.760716 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:28:56.760723 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:28:56.763238 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:28:56.763267 2484801 round_trippers.go:577] Response Headers:
	I0116 04:28:56.763276 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:28:56.763294 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:28:56.763301 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:28:56 GMT
	I0116 04:28:56.763309 2484801 round_trippers.go:580]     Audit-Id: 5165315e-f68f-450f-93e5-9c04e20a1ec0
	I0116 04:28:56.763315 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:28:56.763335 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:28:56.763670 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"483","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0116 04:28:57.260709 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:28:57.260732 2484801 round_trippers.go:469] Request Headers:
	I0116 04:28:57.260743 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:28:57.260762 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:28:57.263115 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:28:57.263149 2484801 round_trippers.go:577] Response Headers:
	I0116 04:28:57.263157 2484801 round_trippers.go:580]     Audit-Id: 039cb530-57e3-4ec7-8834-7dc51e363d86
	I0116 04:28:57.263164 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:28:57.263183 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:28:57.263199 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:28:57.263205 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:28:57.263212 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:28:57 GMT
	I0116 04:28:57.263433 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"483","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0116 04:28:57.263823 2484801 node_ready.go:58] node "multinode-701570-m02" has status "Ready":"False"
	I0116 04:28:57.760528 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:28:57.760555 2484801 round_trippers.go:469] Request Headers:
	I0116 04:28:57.760566 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:28:57.760573 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:28:57.763260 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:28:57.763285 2484801 round_trippers.go:577] Response Headers:
	I0116 04:28:57.763295 2484801 round_trippers.go:580]     Audit-Id: 62d6494e-e950-48ef-81f5-e47be2cd41d6
	I0116 04:28:57.763302 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:28:57.763308 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:28:57.763314 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:28:57.763321 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:28:57.763330 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:28:57 GMT
	I0116 04:28:57.763518 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"483","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0116 04:28:58.261310 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:28:58.261334 2484801 round_trippers.go:469] Request Headers:
	I0116 04:28:58.261344 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:28:58.261351 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:28:58.263933 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:28:58.263958 2484801 round_trippers.go:577] Response Headers:
	I0116 04:28:58.263968 2484801 round_trippers.go:580]     Audit-Id: 1da46ade-4c81-4df5-a40d-5a153006c055
	I0116 04:28:58.263975 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:28:58.263990 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:28:58.264000 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:28:58.264006 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:28:58.264013 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:28:58 GMT
	I0116 04:28:58.264153 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"483","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0116 04:28:58.761304 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:28:58.761330 2484801 round_trippers.go:469] Request Headers:
	I0116 04:28:58.761343 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:28:58.761351 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:28:58.763985 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:28:58.764010 2484801 round_trippers.go:577] Response Headers:
	I0116 04:28:58.764020 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:28:58.764027 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:28:58.764033 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:28:58.764040 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:28:58 GMT
	I0116 04:28:58.764046 2484801 round_trippers.go:580]     Audit-Id: 9025a964-d2c3-4702-8935-2b2ea14cd06e
	I0116 04:28:58.764053 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:28:58.764326 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"483","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I0116 04:28:59.260509 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:28:59.260532 2484801 round_trippers.go:469] Request Headers:
	I0116 04:28:59.260543 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:28:59.260550 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:28:59.262963 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:28:59.262984 2484801 round_trippers.go:577] Response Headers:
	I0116 04:28:59.262992 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:28:59.263000 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:28:59 GMT
	I0116 04:28:59.263006 2484801 round_trippers.go:580]     Audit-Id: 4dde5c52-39d8-4d8a-948b-981294680899
	I0116 04:28:59.263012 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:28:59.263018 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:28:59.263028 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:28:59.263346 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"495","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 04:28:59.760673 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:28:59.760699 2484801 round_trippers.go:469] Request Headers:
	I0116 04:28:59.760708 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:28:59.760716 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:28:59.763470 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:28:59.763491 2484801 round_trippers.go:577] Response Headers:
	I0116 04:28:59.763500 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:28:59.763506 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:28:59 GMT
	I0116 04:28:59.763513 2484801 round_trippers.go:580]     Audit-Id: 98efa237-2e05-4b62-9921-5f75a1f44c29
	I0116 04:28:59.763529 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:28:59.763535 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:28:59.763541 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:28:59.763677 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"495","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 04:28:59.764083 2484801 node_ready.go:58] node "multinode-701570-m02" has status "Ready":"False"
	I0116 04:29:00.260527 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:29:00.260555 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:00.260566 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:00.260574 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:00.263157 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:29:00.263183 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:00.263192 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:00.263199 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:00.263205 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:00.263212 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:00 GMT
	I0116 04:29:00.263219 2484801 round_trippers.go:580]     Audit-Id: c9b3d74f-60e0-4613-9a6b-54d281743257
	I0116 04:29:00.263225 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:00.263378 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"495","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 04:29:00.760899 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:29:00.760926 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:00.760936 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:00.760943 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:00.763402 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:29:00.763433 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:00.763442 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:00.763449 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:00 GMT
	I0116 04:29:00.763470 2484801 round_trippers.go:580]     Audit-Id: 3b7d3062-2eac-4d75-af35-3106219325fc
	I0116 04:29:00.763481 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:00.763488 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:00.763501 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:00.763640 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"495","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 04:29:01.261340 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:29:01.261365 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:01.261375 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:01.261382 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:01.263889 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:29:01.263913 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:01.263921 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:01.263928 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:01.263934 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:01.263940 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:01.263948 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:01 GMT
	I0116 04:29:01.263954 2484801 round_trippers.go:580]     Audit-Id: 530f1d4d-1a09-47f1-9212-fb274be629f9
	I0116 04:29:01.264072 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"495","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 04:29:01.761232 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:29:01.761262 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:01.761272 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:01.761279 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:01.763748 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:29:01.763773 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:01.763782 2484801 round_trippers.go:580]     Audit-Id: b5928393-920f-40ab-b160-7f47a177bc4b
	I0116 04:29:01.763788 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:01.763795 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:01.763801 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:01.763807 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:01.763818 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:01 GMT
	I0116 04:29:01.763950 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"495","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 04:29:01.764347 2484801 node_ready.go:58] node "multinode-701570-m02" has status "Ready":"False"
	I0116 04:29:02.260676 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:29:02.260701 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:02.260711 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:02.260719 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:02.263154 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:29:02.263178 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:02.263186 2484801 round_trippers.go:580]     Audit-Id: 56a4af5b-fac7-474a-a3c8-79ce0e4c2854
	I0116 04:29:02.263193 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:02.263200 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:02.263226 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:02.263240 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:02.263248 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:02 GMT
	I0116 04:29:02.263413 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"495","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 04:29:02.760844 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:29:02.760870 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:02.760880 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:02.760888 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:02.764361 2484801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 04:29:02.764387 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:02.764397 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:02 GMT
	I0116 04:29:02.764404 2484801 round_trippers.go:580]     Audit-Id: 5f2970be-473b-43b0-8d6b-fd5824557d80
	I0116 04:29:02.764410 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:02.764416 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:02.764422 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:02.764433 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:02.764542 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"495","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 04:29:03.261078 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:29:03.261109 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:03.261120 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:03.261127 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:03.263532 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:29:03.263579 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:03.263592 2484801 round_trippers.go:580]     Audit-Id: 8c7d54af-8b08-4719-80aa-eb445d8a72ff
	I0116 04:29:03.263599 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:03.263605 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:03.263611 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:03.263617 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:03.263623 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:03 GMT
	I0116 04:29:03.263736 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"495","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 04:29:03.761283 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:29:03.761311 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:03.761321 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:03.761328 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:03.763917 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:29:03.763939 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:03.763949 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:03.763956 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:03.763963 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:03.763969 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:03.763975 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:03 GMT
	I0116 04:29:03.763981 2484801 round_trippers.go:580]     Audit-Id: d1387ea4-712a-456e-beda-27a79a52e050
	I0116 04:29:03.764157 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"495","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 04:29:03.764596 2484801 node_ready.go:58] node "multinode-701570-m02" has status "Ready":"False"
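	(Editor's note: the recurring node_ready.go:58 lines above report the NodeReady condition parsed out of each GET /api/v1/nodes/<name> response. Below is a minimal sketch, not minikube's actual node_ready implementation, of how a "Ready":"False" status can be derived from a Node object; the readyStatus helper name and the inlined Node literal are hypothetical, chosen to mirror the logged state.)

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
	)

	// readyStatus returns the value of the NodeReady condition ("True",
	// "False", or "Unknown"), defaulting to "Unknown" when the condition
	// is absent from the node's status.
	func readyStatus(node *corev1.Node) corev1.ConditionStatus {
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				return cond.Status
			}
		}
		return corev1.ConditionUnknown
	}

	func main() {
		// Hypothetical node mirroring the logged state of multinode-701570-m02.
		node := &corev1.Node{
			Status: corev1.NodeStatus{
				Conditions: []corev1.NodeCondition{
					{Type: corev1.NodeReady, Status: corev1.ConditionFalse},
				},
			},
		}
		fmt.Printf("node has status %q:%q\n", "Ready", readyStatus(node))
	}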
	I0116 04:29:04.260526 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:29:04.260552 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:04.260570 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:04.260591 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:04.263333 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:29:04.263362 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:04.263374 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:04.263381 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:04.263406 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:04.263417 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:04 GMT
	I0116 04:29:04.263428 2484801 round_trippers.go:580]     Audit-Id: 2cdf6adb-e354-427b-8f19-20fca23b79e4
	I0116 04:29:04.263453 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:04.263921 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"495","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 04:29:04.760566 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:29:04.760593 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:04.760605 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:04.760612 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:04.763081 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:29:04.763099 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:04.763107 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:04.763114 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:04 GMT
	I0116 04:29:04.763120 2484801 round_trippers.go:580]     Audit-Id: 62c5161e-1fb8-49db-b0d0-a6759b15e601
	I0116 04:29:04.763126 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:04.763132 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:04.763138 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:04.763383 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"495","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 04:29:05.260810 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:29:05.260854 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:05.260865 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:05.260872 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:05.263403 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:29:05.263427 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:05.263436 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:05.263443 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:05.263452 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:05.263459 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:05 GMT
	I0116 04:29:05.263465 2484801 round_trippers.go:580]     Audit-Id: a4b7c894-c2cf-450b-8eab-6bcf45295668
	I0116 04:29:05.263471 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:05.263707 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"495","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 04:29:05.760540 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:29:05.760565 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:05.760576 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:05.760583 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:05.763156 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:29:05.763178 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:05.763186 2484801 round_trippers.go:580]     Audit-Id: f1062695-d0c5-4f6b-ba31-4b7a1fd424fc
	I0116 04:29:05.763193 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:05.763199 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:05.763205 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:05.763211 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:05.763217 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:05 GMT
	I0116 04:29:05.763340 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"495","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 04:29:06.260466 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:29:06.260494 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:06.260505 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:06.260512 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:06.263007 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:29:06.263031 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:06.263039 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:06.263046 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:06.263052 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:06.263058 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:06 GMT
	I0116 04:29:06.263064 2484801 round_trippers.go:580]     Audit-Id: 5a0fb354-02da-44b0-b018-faae8f20d11d
	I0116 04:29:06.263070 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:06.263191 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"495","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 04:29:06.263600 2484801 node_ready.go:58] node "multinode-701570-m02" has status "Ready":"False"
	I0116 04:29:06.761251 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:29:06.761286 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:06.761298 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:06.761311 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:06.763854 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:29:06.763874 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:06.763883 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:06.763890 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:06.763896 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:06.763903 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:06.763909 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:06 GMT
	I0116 04:29:06.763915 2484801 round_trippers.go:580]     Audit-Id: 9944f37b-0ea9-400e-bef4-664818f51edd
	I0116 04:29:06.764046 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"495","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 04:29:07.261252 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:29:07.261276 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:07.261287 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:07.261294 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:07.263735 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:29:07.263759 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:07.263768 2484801 round_trippers.go:580]     Audit-Id: 70b10177-bc82-47ec-956b-21189ec176b6
	I0116 04:29:07.263775 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:07.263785 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:07.263792 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:07.263802 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:07.263808 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:07 GMT
	I0116 04:29:07.263967 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"495","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 04:29:07.761162 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:29:07.761186 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:07.761196 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:07.761203 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:07.763739 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:29:07.763765 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:07.763775 2484801 round_trippers.go:580]     Audit-Id: 30bc68b9-c862-4e9b-8d6d-fa558819d421
	I0116 04:29:07.763784 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:07.763790 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:07.763796 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:07.763809 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:07.763816 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:07 GMT
	I0116 04:29:07.764061 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"495","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 04:29:08.261209 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:29:08.261233 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:08.261243 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:08.261251 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:08.263738 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:29:08.263764 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:08.263773 2484801 round_trippers.go:580]     Audit-Id: de0dcd3b-ddaa-40f5-ba12-6aafc22f7d22
	I0116 04:29:08.263780 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:08.263787 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:08.263793 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:08.263801 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:08.263808 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:08 GMT
	I0116 04:29:08.263982 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"495","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 04:29:08.264403 2484801 node_ready.go:58] node "multinode-701570-m02" has status "Ready":"False"
	I0116 04:29:08.761164 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:29:08.761201 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:08.761212 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:08.761219 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:08.767105 2484801 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0116 04:29:08.767131 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:08.767140 2484801 round_trippers.go:580]     Audit-Id: 7e54f861-2c29-4589-b472-1843749da164
	I0116 04:29:08.767146 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:08.767164 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:08.767170 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:08.767177 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:08.767183 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:08 GMT
	I0116 04:29:08.767376 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"495","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 04:29:09.260887 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:29:09.260914 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:09.260925 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:09.260932 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:09.263425 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:29:09.263497 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:09.263521 2484801 round_trippers.go:580]     Audit-Id: 18124fee-5ddf-4833-996e-954eeb308ce4
	I0116 04:29:09.263542 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:09.263609 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:09.263626 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:09.263634 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:09.263640 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:09 GMT
	I0116 04:29:09.263767 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"495","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 04:29:09.761347 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:29:09.761374 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:09.761384 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:09.761392 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:09.763811 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:29:09.763836 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:09.763846 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:09.763853 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:09 GMT
	I0116 04:29:09.763860 2484801 round_trippers.go:580]     Audit-Id: 02d9248c-f460-4319-9b91-908e4ddc6021
	I0116 04:29:09.763866 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:09.763872 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:09.763885 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:09.764207 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"495","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 04:29:10.260720 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:29:10.260788 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:10.260801 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:10.260818 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:10.263404 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:29:10.263429 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:10.263463 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:10.263478 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:10.263486 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:10.263493 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:10.263511 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:10 GMT
	I0116 04:29:10.263548 2484801 round_trippers.go:580]     Audit-Id: fb3eca4b-a7b9-44cd-abf9-997f1ad6a34a
	I0116 04:29:10.263804 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"495","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 04:29:10.760557 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:29:10.760583 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:10.760594 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:10.760601 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:10.763118 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:29:10.763144 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:10.763154 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:10.763161 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:10.763168 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:10.763175 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:10.763181 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:10 GMT
	I0116 04:29:10.763196 2484801 round_trippers.go:580]     Audit-Id: 467e37d1-88f3-4f23-92c8-fdab3f76e8ec
	I0116 04:29:10.763607 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"495","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 04:29:10.764040 2484801 node_ready.go:58] node "multinode-701570-m02" has status "Ready":"False"
	I0116 04:29:11.261256 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:29:11.261281 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:11.261291 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:11.261300 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:11.263758 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:29:11.263779 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:11.263789 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:11.263796 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:11 GMT
	I0116 04:29:11.263802 2484801 round_trippers.go:580]     Audit-Id: c13d215c-caa2-41a9-aeb3-208856164adf
	I0116 04:29:11.263808 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:11.263815 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:11.263821 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:11.263975 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"495","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 04:29:11.761157 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:29:11.761183 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:11.761193 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:11.761200 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:11.763681 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:29:11.763701 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:11.763723 2484801 round_trippers.go:580]     Audit-Id: 83164952-ee7c-4082-be3b-1455753189dd
	I0116 04:29:11.763730 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:11.763736 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:11.763742 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:11.763748 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:11.763754 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:11 GMT
	I0116 04:29:11.763922 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"495","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 04:29:12.261087 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:29:12.261113 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:12.261124 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:12.261131 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:12.263776 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:29:12.263802 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:12.263810 2484801 round_trippers.go:580]     Audit-Id: d3023e32-b793-4927-9642-82c97d4ddb3f
	I0116 04:29:12.263826 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:12.263833 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:12.263839 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:12.263846 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:12.263853 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:12 GMT
	I0116 04:29:12.264217 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"495","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 04:29:12.761371 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:29:12.761394 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:12.761404 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:12.761417 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:12.763969 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:29:12.763989 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:12.763997 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:12.764004 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:12 GMT
	I0116 04:29:12.764010 2484801 round_trippers.go:580]     Audit-Id: bee60221-6f1b-483a-9a6c-6e927470d915
	I0116 04:29:12.764016 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:12.764022 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:12.764027 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:12.764274 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"495","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 04:29:12.764676 2484801 node_ready.go:58] node "multinode-701570-m02" has status "Ready":"False"
	I0116 04:29:13.261337 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:29:13.261357 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:13.261367 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:13.261374 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:13.263839 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:29:13.263865 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:13.263875 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:13.263881 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:13 GMT
	I0116 04:29:13.263887 2484801 round_trippers.go:580]     Audit-Id: 216efd6a-b4e9-47e9-8d16-0d648d127552
	I0116 04:29:13.263896 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:13.263902 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:13.263908 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:13.264092 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"495","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 04:29:13.760931 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:29:13.760955 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:13.760967 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:13.760974 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:13.763519 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:29:13.763546 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:13.763555 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:13 GMT
	I0116 04:29:13.763562 2484801 round_trippers.go:580]     Audit-Id: 0f66969f-f0f2-46e6-a723-6c82c9742ac8
	I0116 04:29:13.763568 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:13.763574 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:13.763580 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:13.763587 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:13.763928 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"495","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 04:29:14.261320 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:29:14.261345 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:14.261356 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:14.261363 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:14.263908 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:29:14.263930 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:14.263938 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:14.263944 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:14.263951 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:14.263957 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:14 GMT
	I0116 04:29:14.263979 2484801 round_trippers.go:580]     Audit-Id: 90dab43c-33a0-4906-90d8-b0fba2cd8371
	I0116 04:29:14.263991 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:14.264181 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"495","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 04:29:14.760680 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:29:14.760708 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:14.760719 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:14.760726 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:14.763269 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:29:14.763338 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:14.763347 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:14.763354 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:14.763366 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:14.763374 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:14 GMT
	I0116 04:29:14.763380 2484801 round_trippers.go:580]     Audit-Id: e7ad167e-459c-4ae0-b29d-70967dced6ee
	I0116 04:29:14.763389 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:14.763507 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"495","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 04:29:15.261117 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:29:15.261143 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:15.261153 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:15.261160 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:15.263648 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:29:15.263683 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:15.263692 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:15.263699 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:15.263706 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:15 GMT
	I0116 04:29:15.263714 2484801 round_trippers.go:580]     Audit-Id: 138995b4-3dd6-4fa7-b2c4-c1ed5e18bae4
	I0116 04:29:15.263724 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:15.263730 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:15.263888 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"495","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 04:29:15.264302 2484801 node_ready.go:58] node "multinode-701570-m02" has status "Ready":"False"
	I0116 04:29:15.761000 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:29:15.761025 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:15.761037 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:15.761045 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:15.763709 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:29:15.763760 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:15.763769 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:15.763776 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:15 GMT
	I0116 04:29:15.763782 2484801 round_trippers.go:580]     Audit-Id: 8f2a8d88-eb7e-48f7-ad07-00ffc40c13ac
	I0116 04:29:15.763788 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:15.763795 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:15.763809 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:15.763923 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"495","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 04:29:16.261403 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:29:16.261450 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:16.261460 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:16.261471 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:16.264021 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:29:16.264046 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:16.264056 2484801 round_trippers.go:580]     Audit-Id: 27a99404-45c5-418c-87cf-07c5c52bede2
	I0116 04:29:16.264062 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:16.264068 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:16.264083 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:16.264090 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:16.264097 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:16 GMT
	I0116 04:29:16.264255 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"495","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 04:29:16.761094 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:29:16.761118 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:16.761129 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:16.761137 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:16.763651 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:29:16.763676 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:16.763684 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:16.763691 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:16.763697 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:16.763704 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:16 GMT
	I0116 04:29:16.763710 2484801 round_trippers.go:580]     Audit-Id: 81c4929d-533f-4752-aabf-a9875e098f4d
	I0116 04:29:16.763732 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:16.763864 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"495","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 04:29:17.261435 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:29:17.261462 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:17.261471 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:17.261479 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:17.264059 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:29:17.264083 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:17.264093 2484801 round_trippers.go:580]     Audit-Id: 0789d7d5-1734-41fc-b893-d7f9b275ca91
	I0116 04:29:17.264100 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:17.264106 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:17.264112 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:17.264121 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:17.264128 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:17 GMT
	I0116 04:29:17.264363 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"495","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 04:29:17.264833 2484801 node_ready.go:58] node "multinode-701570-m02" has status "Ready":"False"
	I0116 04:29:17.761218 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:29:17.761242 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:17.761251 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:17.761259 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:17.763722 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:29:17.763790 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:17.763814 2484801 round_trippers.go:580]     Audit-Id: d8f11958-b9fe-425f-8a51-176afbd86907
	I0116 04:29:17.763873 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:17.763887 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:17.763895 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:17.763913 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:17.763920 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:17 GMT
	I0116 04:29:17.764063 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"495","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 04:29:18.260504 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:29:18.260533 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:18.260543 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:18.260551 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:18.263134 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:29:18.263158 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:18.263168 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:18.263175 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:18 GMT
	I0116 04:29:18.263182 2484801 round_trippers.go:580]     Audit-Id: 714d26f0-35e2-41ae-94ea-36c1a2cc2f62
	I0116 04:29:18.263189 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:18.263201 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:18.263209 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:18.263593 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"495","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 04:29:18.761211 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:29:18.761232 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:18.761242 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:18.761249 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:18.763803 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:29:18.763823 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:18.763831 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:18.763838 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:18.763844 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:18 GMT
	I0116 04:29:18.763850 2484801 round_trippers.go:580]     Audit-Id: 33a1fee6-66e4-4ab7-98f6-1bc7a2032ef8
	I0116 04:29:18.763856 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:18.763862 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:18.763985 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"495","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 04:29:19.260530 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:29:19.260557 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:19.260566 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:19.260574 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:19.263191 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:29:19.263257 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:19.263271 2484801 round_trippers.go:580]     Audit-Id: 4da39f9b-724a-477b-8e33-608b0de32492
	I0116 04:29:19.263279 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:19.263287 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:19.263294 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:19.263300 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:19.263308 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:19 GMT
	I0116 04:29:19.263495 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"495","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 04:29:19.760901 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:29:19.760927 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:19.760937 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:19.760945 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:19.763477 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:29:19.763496 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:19.763505 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:19 GMT
	I0116 04:29:19.763512 2484801 round_trippers.go:580]     Audit-Id: ef805eaa-4356-41dd-9467-d6a9431e10b9
	I0116 04:29:19.763518 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:19.763524 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:19.763530 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:19.763536 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:19.763685 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"495","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 04:29:19.764083 2484801 node_ready.go:58] node "multinode-701570-m02" has status "Ready":"False"
	I0116 04:29:20.260970 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:29:20.260993 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:20.261004 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:20.261012 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:20.263472 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:29:20.263497 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:20.263505 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:20 GMT
	I0116 04:29:20.263512 2484801 round_trippers.go:580]     Audit-Id: abc2a8a5-c945-4be0-b171-3970082e85a4
	I0116 04:29:20.263519 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:20.263525 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:20.263531 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:20.263538 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:20.263673 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"495","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 6113 chars]
	I0116 04:29:20.761005 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:29:20.761028 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:20.761038 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:20.761045 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:20.763554 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:29:20.763580 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:20.763589 2484801 round_trippers.go:580]     Audit-Id: e265906d-0f08-4521-9eff-c7f5772ef2ea
	I0116 04:29:20.763596 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:20.763602 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:20.763609 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:20.763617 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:20.763628 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:20 GMT
	I0116 04:29:20.763904 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"518","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5930 chars]
	I0116 04:29:20.764312 2484801 node_ready.go:49] node "multinode-701570-m02" has status "Ready":"True"
	I0116 04:29:20.764331 2484801 node_ready.go:38] duration metric: took 30.504064938s waiting for node "multinode-701570-m02" to be "Ready" ...
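
Note: the half-second cadence of the GETs above is minikube's node_ready wait loop. Each pass fetches the Node object and inspects its Ready condition until the status flips to "True" (here after 30.5s, once the node's resourceVersion advanced from 495 to 518). A minimal client-go sketch of that pattern follows; the 500ms interval, kubeconfig path, and node name are assumptions read off the log, not minikube's actual node_ready.go.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the API server until the named node reports
    // Ready=True, mirroring the repeated GET loop in the log above.
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
        tick := time.NewTicker(500 * time.Millisecond) // interval inferred from log timestamps
        defer tick.Stop()
        for {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                    return nil
                }
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-tick.C:
            }
        }
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        if err := waitNodeReady(ctx, kubernetes.NewForConfigOrDie(cfg), "multinode-701570-m02"); err != nil {
            panic(err)
        }
        fmt.Println("node Ready")
    }
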
	I0116 04:29:20.764341 2484801 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 04:29:20.764403 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0116 04:29:20.764413 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:20.764421 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:20.764428 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:20.767862 2484801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 04:29:20.767885 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:20.767893 2484801 round_trippers.go:580]     Audit-Id: c80954bf-6730-4fa1-9fe3-7b5edc209a0a
	I0116 04:29:20.767900 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:20.767906 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:20.767913 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:20.767922 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:20.767933 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:20 GMT
	I0116 04:29:20.768584 2484801 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"518"},"items":[{"metadata":{"name":"coredns-5dd5756b68-hm6kd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"0707ad3b-2557-49c2-bdc3-77554baac045","resourceVersion":"430","creationTimestamp":"2024-01-16T04:28:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"13e7ee15-d416-49ef-a50d-0f96dca51f4c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13e7ee15-d416-49ef-a50d-0f96dca51f4c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68972 chars]
	I0116 04:29:20.771582 2484801 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-hm6kd" in "kube-system" namespace to be "Ready" ...
	I0116 04:29:20.771668 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-hm6kd
	I0116 04:29:20.771681 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:20.771690 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:20.771697 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:20.774144 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:29:20.774164 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:20.774172 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:20.774179 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:20.774185 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:20.774195 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:20.774201 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:20 GMT
	I0116 04:29:20.774211 2484801 round_trippers.go:580]     Audit-Id: b2530b1b-1769-45af-931f-c58c7ad00e01
	I0116 04:29:20.774362 2484801 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-hm6kd","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"0707ad3b-2557-49c2-bdc3-77554baac045","resourceVersion":"430","creationTimestamp":"2024-01-16T04:28:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"13e7ee15-d416-49ef-a50d-0f96dca51f4c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13e7ee15-d416-49ef-a50d-0f96dca51f4c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0116 04:29:20.774890 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570
	I0116 04:29:20.774908 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:20.774917 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:20.774924 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:20.777051 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:29:20.777070 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:20.777079 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:20.777085 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:20.777092 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:20.777101 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:20 GMT
	I0116 04:29:20.777112 2484801 round_trippers.go:580]     Audit-Id: c5097d8d-e2b7-4c0f-9b1d-45cb5c756d4b
	I0116 04:29:20.777118 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:20.777373 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570","uid":"966e9bfd-0814-4772-920d-6bdadae6d98d","resourceVersion":"414","creationTimestamp":"2024-01-16T04:28:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T04_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T04:28:15Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0116 04:29:20.777772 2484801 pod_ready.go:92] pod "coredns-5dd5756b68-hm6kd" in "kube-system" namespace has status "Ready":"True"
	I0116 04:29:20.777792 2484801 pod_ready.go:81] duration metric: took 6.185263ms waiting for pod "coredns-5dd5756b68-hm6kd" in "kube-system" namespace to be "Ready" ...
	I0116 04:29:20.777803 2484801 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-701570" in "kube-system" namespace to be "Ready" ...
	I0116 04:29:20.777864 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-701570
	I0116 04:29:20.777875 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:20.777884 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:20.777896 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:20.780114 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:29:20.780139 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:20.780148 2484801 round_trippers.go:580]     Audit-Id: f05cc451-6cf9-433e-a4dd-ac76c619efe3
	I0116 04:29:20.780155 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:20.780187 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:20.780196 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:20.780206 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:20.780212 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:20 GMT
	I0116 04:29:20.780538 2484801 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-701570","namespace":"kube-system","uid":"0a5cfa74-94f0-4823-a5a1-5958ed6b1bf0","resourceVersion":"300","creationTimestamp":"2024-01-16T04:28:17Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"cabe73156eb586d028a90186f6f018fa","kubernetes.io/config.mirror":"cabe73156eb586d028a90186f6f018fa","kubernetes.io/config.seen":"2024-01-16T04:28:09.585168419Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-701570","uid":"966e9bfd-0814-4772-920d-6bdadae6d98d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0116 04:29:20.781007 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570
	I0116 04:29:20.781026 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:20.781035 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:20.781042 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:20.783233 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:29:20.783252 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:20.783261 2484801 round_trippers.go:580]     Audit-Id: 9587a9bf-da08-42a8-b970-4bfcf47253c6
	I0116 04:29:20.783267 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:20.783274 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:20.783280 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:20.783286 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:20.783293 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:20 GMT
	I0116 04:29:20.783463 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570","uid":"966e9bfd-0814-4772-920d-6bdadae6d98d","resourceVersion":"414","creationTimestamp":"2024-01-16T04:28:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T04_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T04:28:15Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0116 04:29:20.783872 2484801 pod_ready.go:92] pod "etcd-multinode-701570" in "kube-system" namespace has status "Ready":"True"
	I0116 04:29:20.783885 2484801 pod_ready.go:81] duration metric: took 6.0737ms waiting for pod "etcd-multinode-701570" in "kube-system" namespace to be "Ready" ...
	I0116 04:29:20.783901 2484801 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-701570" in "kube-system" namespace to be "Ready" ...
	I0116 04:29:20.783962 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-701570
	I0116 04:29:20.783966 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:20.783974 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:20.783980 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:20.786168 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:29:20.786220 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:20.786248 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:20.786274 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:20.786310 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:20.786325 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:20.786332 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:20 GMT
	I0116 04:29:20.786339 2484801 round_trippers.go:580]     Audit-Id: b72eef20-bab7-4379-a6aa-e612dd9e1dd2
	I0116 04:29:20.786494 2484801 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-701570","namespace":"kube-system","uid":"b9356c08-4daf-406f-a670-6a9b9e16f9f5","resourceVersion":"304","creationTimestamp":"2024-01-16T04:28:18Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"58157bca00bf98e9c1e982a9206a6678","kubernetes.io/config.mirror":"58157bca00bf98e9c1e982a9206a6678","kubernetes.io/config.seen":"2024-01-16T04:28:18.471369511Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-701570","uid":"966e9bfd-0814-4772-920d-6bdadae6d98d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0116 04:29:20.787084 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570
	I0116 04:29:20.787102 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:20.787112 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:20.787119 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:20.789388 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:29:20.789410 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:20.789418 2484801 round_trippers.go:580]     Audit-Id: 873c4e77-a1d0-47c1-9c42-4d0445835787
	I0116 04:29:20.789424 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:20.789431 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:20.789437 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:20.789447 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:20.789459 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:20 GMT
	I0116 04:29:20.789630 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570","uid":"966e9bfd-0814-4772-920d-6bdadae6d98d","resourceVersion":"414","creationTimestamp":"2024-01-16T04:28:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T04_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T04:28:15Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0116 04:29:20.790016 2484801 pod_ready.go:92] pod "kube-apiserver-multinode-701570" in "kube-system" namespace has status "Ready":"True"
	I0116 04:29:20.790033 2484801 pod_ready.go:81] duration metric: took 6.125974ms waiting for pod "kube-apiserver-multinode-701570" in "kube-system" namespace to be "Ready" ...
	I0116 04:29:20.790044 2484801 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-701570" in "kube-system" namespace to be "Ready" ...
	I0116 04:29:20.790105 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-701570
	I0116 04:29:20.790115 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:20.790123 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:20.790130 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:20.792537 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:29:20.792596 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:20.792621 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:20 GMT
	I0116 04:29:20.792645 2484801 round_trippers.go:580]     Audit-Id: 1a3ee79e-f79f-4572-92b1-d6e2332e2da0
	I0116 04:29:20.792672 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:20.792679 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:20.792686 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:20.792692 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:20.792837 2484801 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-701570","namespace":"kube-system","uid":"99034f3b-f366-4321-9b3e-a956f134b849","resourceVersion":"306","creationTimestamp":"2024-01-16T04:28:18Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e732cbf0a05a16e89e295e6fc3da387d","kubernetes.io/config.mirror":"e732cbf0a05a16e89e295e6fc3da387d","kubernetes.io/config.seen":"2024-01-16T04:28:18.471370815Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-701570","uid":"966e9bfd-0814-4772-920d-6bdadae6d98d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0116 04:29:20.793362 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570
	I0116 04:29:20.793379 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:20.793388 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:20.793400 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:20.795674 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:29:20.795765 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:20.795788 2484801 round_trippers.go:580]     Audit-Id: 7fc4cdfd-4a02-4a6f-8d24-d977747b0234
	I0116 04:29:20.795823 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:20.795849 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:20.795874 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:20.795908 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:20.795932 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:20 GMT
	I0116 04:29:20.796052 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570","uid":"966e9bfd-0814-4772-920d-6bdadae6d98d","resourceVersion":"414","creationTimestamp":"2024-01-16T04:28:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T04_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T04:28:15Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0116 04:29:20.796436 2484801 pod_ready.go:92] pod "kube-controller-manager-multinode-701570" in "kube-system" namespace has status "Ready":"True"
	I0116 04:29:20.796457 2484801 pod_ready.go:81] duration metric: took 6.402676ms waiting for pod "kube-controller-manager-multinode-701570" in "kube-system" namespace to be "Ready" ...
	I0116 04:29:20.796470 2484801 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vfpkz" in "kube-system" namespace to be "Ready" ...
	I0116 04:29:20.961853 2484801 request.go:629] Waited for 165.297425ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vfpkz
	I0116 04:29:20.961938 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vfpkz
	I0116 04:29:20.961950 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:20.961960 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:20.961967 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:20.964936 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:29:20.965089 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:20.965138 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:20.965189 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:20.965205 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:20 GMT
	I0116 04:29:20.965213 2484801 round_trippers.go:580]     Audit-Id: 55bfabd9-1e6f-4536-9182-142eb6e1697a
	I0116 04:29:20.965227 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:20.965234 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:20.965438 2484801 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vfpkz","generateName":"kube-proxy-","namespace":"kube-system","uid":"ddc18168-647c-461d-9bfd-1aa348eb6308","resourceVersion":"486","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bed87baa-dee4-463c-a56f-428fde34fcf2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bed87baa-dee4-463c-a56f-428fde34fcf2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
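
Note: the "Waited ... due to client-side throttling, not priority and fairness" lines are emitted by client-go itself. Its client-side rate limiter is a QPS/Burst token bucket (defaults QPS=5, Burst=10), and the burst of back-to-back GETs in this phase exceeds it, so requests sleep briefly before being sent; the wording distinguishes this local limiter from server-side API Priority and Fairness (the X-Kubernetes-Pf-* response headers above). A sketch of raising those limits on a rest.Config, with illustrative values:

    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        // Defaults are QPS=5, Burst=10; rapid sequences of GETs like the
        // ones above trip the bucket and produce the throttling waits.
        cfg.QPS = 50
        cfg.Burst = 100
        _ = kubernetes.NewForConfigOrDie(cfg)
    }
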
	I0116 04:29:21.161342 2484801 request.go:629] Waited for 195.352367ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:29:21.161426 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570-m02
	I0116 04:29:21.161433 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:21.161443 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:21.161451 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:21.164006 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:29:21.164040 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:21.164049 2484801 round_trippers.go:580]     Audit-Id: 1d0c424c-51ed-4369-b1fd-1a50f27bb58c
	I0116 04:29:21.164055 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:21.164061 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:21.164067 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:21.164073 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:21.164080 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:21 GMT
	I0116 04:29:21.164185 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570-m02","uid":"2c05e44a-3db7-4ba0-b0f0-cea55e3a62d4","resourceVersion":"518","creationTimestamp":"2024-01-16T04:28:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T04_28_49_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5930 chars]
	I0116 04:29:21.164595 2484801 pod_ready.go:92] pod "kube-proxy-vfpkz" in "kube-system" namespace has status "Ready":"True"
	I0116 04:29:21.164610 2484801 pod_ready.go:81] duration metric: took 368.128855ms waiting for pod "kube-proxy-vfpkz" in "kube-system" namespace to be "Ready" ...
	I0116 04:29:21.164622 2484801 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zmnvg" in "kube-system" namespace to be "Ready" ...
	I0116 04:29:21.361958 2484801 request.go:629] Waited for 197.271169ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zmnvg
	I0116 04:29:21.362078 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zmnvg
	I0116 04:29:21.362092 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:21.362103 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:21.362112 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:21.364882 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:29:21.364949 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:21.364974 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:21.364995 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:21.365020 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:21.365028 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:21.365034 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:21 GMT
	I0116 04:29:21.365040 2484801 round_trippers.go:580]     Audit-Id: bc49bf48-f307-47d0-8b25-b764e518acb9
	I0116 04:29:21.365170 2484801 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-zmnvg","generateName":"kube-proxy-","namespace":"kube-system","uid":"49fc2d49-9a21-4b2f-afe7-0bbf3a4fa6b1","resourceVersion":"408","creationTimestamp":"2024-01-16T04:28:31Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bed87baa-dee4-463c-a56f-428fde34fcf2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bed87baa-dee4-463c-a56f-428fde34fcf2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0116 04:29:21.561940 2484801 request.go:629] Waited for 196.260767ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-701570
	I0116 04:29:21.562004 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570
	I0116 04:29:21.562020 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:21.562033 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:21.562041 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:21.564723 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:29:21.564774 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:21.564784 2484801 round_trippers.go:580]     Audit-Id: 43886aea-b266-4e74-a660-143886562fa0
	I0116 04:29:21.564790 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:21.564797 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:21.564803 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:21.564809 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:21.564815 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:21 GMT
	I0116 04:29:21.564938 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570","uid":"966e9bfd-0814-4772-920d-6bdadae6d98d","resourceVersion":"414","creationTimestamp":"2024-01-16T04:28:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T04_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T04:28:15Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0116 04:29:21.565353 2484801 pod_ready.go:92] pod "kube-proxy-zmnvg" in "kube-system" namespace has status "Ready":"True"
	I0116 04:29:21.565375 2484801 pod_ready.go:81] duration metric: took 400.74566ms waiting for pod "kube-proxy-zmnvg" in "kube-system" namespace to be "Ready" ...
	I0116 04:29:21.565387 2484801 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-701570" in "kube-system" namespace to be "Ready" ...
	I0116 04:29:21.761859 2484801 request.go:629] Waited for 196.395868ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-701570
	I0116 04:29:21.761919 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-701570
	I0116 04:29:21.761929 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:21.761939 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:21.761950 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:21.764408 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:29:21.764464 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:21.764488 2484801 round_trippers.go:580]     Audit-Id: 97001169-581e-495d-831f-42b1a747838d
	I0116 04:29:21.764513 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:21.764551 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:21.764577 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:21.764601 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:21.764625 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:21 GMT
	I0116 04:29:21.764790 2484801 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-701570","namespace":"kube-system","uid":"60bf74e8-565d-49eb-98d9-7696c5cb222a","resourceVersion":"302","creationTimestamp":"2024-01-16T04:28:18Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8544d8b8e3b1e3a8d0d12fd2af1361e5","kubernetes.io/config.mirror":"8544d8b8e3b1e3a8d0d12fd2af1361e5","kubernetes.io/config.seen":"2024-01-16T04:28:18.471371824Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-701570","uid":"966e9bfd-0814-4772-920d-6bdadae6d98d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T04:28:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0116 04:29:21.961559 2484801 request.go:629] Waited for 196.301824ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-701570
	I0116 04:29:21.961623 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-701570
	I0116 04:29:21.961632 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:21.961641 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:21.961651 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:21.964166 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:29:21.964191 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:21.964200 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:21.964206 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:21.964213 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:21 GMT
	I0116 04:29:21.964219 2484801 round_trippers.go:580]     Audit-Id: cdc600cc-6c12-4f54-a6a9-2c376ee565c9
	I0116 04:29:21.964226 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:21.964232 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:21.964334 2484801 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-701570","uid":"966e9bfd-0814-4772-920d-6bdadae6d98d","resourceVersion":"414","creationTimestamp":"2024-01-16T04:28:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T04_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-16T04:28:15Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0116 04:29:21.964736 2484801 pod_ready.go:92] pod "kube-scheduler-multinode-701570" in "kube-system" namespace has status "Ready":"True"
	I0116 04:29:21.964780 2484801 pod_ready.go:81] duration metric: took 399.379861ms waiting for pod "kube-scheduler-multinode-701570" in "kube-system" namespace to be "Ready" ...
	I0116 04:29:21.964798 2484801 pod_ready.go:38] duration metric: took 1.200444451s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
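
Note: every pod_ready wait in this phase has the same shape: GET the pod, read its PodReady condition, then cross-check the hosting node, which is why each pod GET above is paired with a node GET. A hypothetical helper showing the condition check behind the pod_ready.go:92 "Ready":"True" lines (not minikube's actual code):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // podReady reports whether a Pod's PodReady condition is True,
    // the check each pod_ready wait above is confirming.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        pod := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
            {Type: corev1.PodReady, Status: corev1.ConditionTrue},
        }}}
        fmt.Println(podReady(pod)) // true
    }
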
	I0116 04:29:21.964812 2484801 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 04:29:21.964878 2484801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 04:29:21.978655 2484801 system_svc.go:56] duration metric: took 13.832116ms WaitForService to wait for kubelet.
	I0116 04:29:21.978693 2484801 kubeadm.go:581] duration metric: took 31.73846394s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 04:29:21.978716 2484801 node_conditions.go:102] verifying NodePressure condition ...
	I0116 04:29:22.161066 2484801 request.go:629] Waited for 182.265339ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0116 04:29:22.161149 2484801 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0116 04:29:22.161159 2484801 round_trippers.go:469] Request Headers:
	I0116 04:29:22.161169 2484801 round_trippers.go:473]     Accept: application/json, */*
	I0116 04:29:22.161183 2484801 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0116 04:29:22.163974 2484801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 04:29:22.163997 2484801 round_trippers.go:577] Response Headers:
	I0116 04:29:22.164005 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 433ee29e-b34f-41d9-bde1-535f6df85e3c
	I0116 04:29:22.164012 2484801 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5601afb9-47cc-40f9-a470-ec567d22719e
	I0116 04:29:22.164019 2484801 round_trippers.go:580]     Date: Tue, 16 Jan 2024 04:29:22 GMT
	I0116 04:29:22.164026 2484801 round_trippers.go:580]     Audit-Id: e7c73006-45fb-4eb7-8d10-c6120420bfe4
	I0116 04:29:22.164037 2484801 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 04:29:22.164046 2484801 round_trippers.go:580]     Content-Type: application/json
	I0116 04:29:22.164218 2484801 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"519"},"items":[{"metadata":{"name":"multinode-701570","uid":"966e9bfd-0814-4772-920d-6bdadae6d98d","resourceVersion":"414","creationTimestamp":"2024-01-16T04:28:15Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-701570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6e8fa5f64d0e7272be43ff25ed3826261f0a2578","minikube.k8s.io/name":"multinode-701570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T04_28_19_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 13004 chars]
	I0116 04:29:22.164925 2484801 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0116 04:29:22.164947 2484801 node_conditions.go:123] node cpu capacity is 2
	I0116 04:29:22.164958 2484801 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0116 04:29:22.164963 2484801 node_conditions.go:123] node cpu capacity is 2
	I0116 04:29:22.164968 2484801 node_conditions.go:105] duration metric: took 186.24708ms to run NodePressure ...
	I0116 04:29:22.164983 2484801 start.go:228] waiting for startup goroutines ...
	I0116 04:29:22.165012 2484801 start.go:242] writing updated cluster config ...
	I0116 04:29:22.165324 2484801 ssh_runner.go:195] Run: rm -f paused
	I0116 04:29:22.227628 2484801 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0116 04:29:22.231234 2484801 out.go:177] * Done! kubectl is now configured to use "multinode-701570" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 16 04:28:33 multinode-701570 crio[905]: time="2024-01-16 04:28:33.321556322Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/a66c093ff6a717c338ce4bce9e8f4edbd6bd7f78c4991263e7eef89f1294db0b/merged/etc/passwd: no such file or directory"
	Jan 16 04:28:33 multinode-701570 crio[905]: time="2024-01-16 04:28:33.321601417Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/a66c093ff6a717c338ce4bce9e8f4edbd6bd7f78c4991263e7eef89f1294db0b/merged/etc/group: no such file or directory"
	Jan 16 04:28:33 multinode-701570 crio[905]: time="2024-01-16 04:28:33.362634779Z" level=info msg="Created container d7f7f79cec22cba11d39f0c5c00f0c726fb860199d0837506b3ac7632afcb32f: kube-system/storage-provisioner/storage-provisioner" id=edb95a62-cc8d-4a35-bee2-bb20327494ec name=/runtime.v1.RuntimeService/CreateContainer
	Jan 16 04:28:33 multinode-701570 crio[905]: time="2024-01-16 04:28:33.363372902Z" level=info msg="Starting container: d7f7f79cec22cba11d39f0c5c00f0c726fb860199d0837506b3ac7632afcb32f" id=87b07f77-ed4f-4483-a889-d7ba61bf5366 name=/runtime.v1.RuntimeService/StartContainer
	Jan 16 04:28:33 multinode-701570 crio[905]: time="2024-01-16 04:28:33.372919499Z" level=info msg="Started container" PID=1951 containerID=d7f7f79cec22cba11d39f0c5c00f0c726fb860199d0837506b3ac7632afcb32f description=kube-system/storage-provisioner/storage-provisioner id=87b07f77-ed4f-4483-a889-d7ba61bf5366 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9b1d0c8720b7fdc2f50f97fc0a4803db702605aed4d9d9076b88efc5f85d748c
	Jan 16 04:29:23 multinode-701570 crio[905]: time="2024-01-16 04:29:23.473846188Z" level=info msg="Running pod sandbox: default/busybox-5bc68d56bd-x6w9z/POD" id=ee66e5f4-e633-4214-b0e6-a43b5a7af78d name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 16 04:29:23 multinode-701570 crio[905]: time="2024-01-16 04:29:23.473912730Z" level=warning msg="Allowed annotations are specified for workload []"
	Jan 16 04:29:23 multinode-701570 crio[905]: time="2024-01-16 04:29:23.495026705Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-x6w9z Namespace:default ID:e6ac718f7c2d5dba95e17b7ade93629991e25bc442d306d8d50d267c3db32733 UID:abcd8745-b810-4e14-bb12-464bc549bcf9 NetNS:/var/run/netns/eacfb9c0-cf46-42b8-a1e6-0b860f0084ed Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 16 04:29:23 multinode-701570 crio[905]: time="2024-01-16 04:29:23.495061847Z" level=info msg="Adding pod default_busybox-5bc68d56bd-x6w9z to CNI network \"kindnet\" (type=ptp)"
	Jan 16 04:29:23 multinode-701570 crio[905]: time="2024-01-16 04:29:23.507194675Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-x6w9z Namespace:default ID:e6ac718f7c2d5dba95e17b7ade93629991e25bc442d306d8d50d267c3db32733 UID:abcd8745-b810-4e14-bb12-464bc549bcf9 NetNS:/var/run/netns/eacfb9c0-cf46-42b8-a1e6-0b860f0084ed Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jan 16 04:29:23 multinode-701570 crio[905]: time="2024-01-16 04:29:23.507345875Z" level=info msg="Checking pod default_busybox-5bc68d56bd-x6w9z for CNI network kindnet (type=ptp)"
	Jan 16 04:29:23 multinode-701570 crio[905]: time="2024-01-16 04:29:23.512451444Z" level=info msg="Ran pod sandbox e6ac718f7c2d5dba95e17b7ade93629991e25bc442d306d8d50d267c3db32733 with infra container: default/busybox-5bc68d56bd-x6w9z/POD" id=ee66e5f4-e633-4214-b0e6-a43b5a7af78d name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 16 04:29:23 multinode-701570 crio[905]: time="2024-01-16 04:29:23.513714821Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=13a16cea-cb3f-4782-8bdb-5042941c854c name=/runtime.v1.ImageService/ImageStatus
	Jan 16 04:29:23 multinode-701570 crio[905]: time="2024-01-16 04:29:23.513929273Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=13a16cea-cb3f-4782-8bdb-5042941c854c name=/runtime.v1.ImageService/ImageStatus
	Jan 16 04:29:23 multinode-701570 crio[905]: time="2024-01-16 04:29:23.516506980Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=e4dde0a1-d100-4f42-8969-aedf933a89a9 name=/runtime.v1.ImageService/PullImage
	Jan 16 04:29:23 multinode-701570 crio[905]: time="2024-01-16 04:29:23.518005510Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Jan 16 04:29:24 multinode-701570 crio[905]: time="2024-01-16 04:29:24.109938867Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Jan 16 04:29:25 multinode-701570 crio[905]: time="2024-01-16 04:29:25.253188750Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3" id=e4dde0a1-d100-4f42-8969-aedf933a89a9 name=/runtime.v1.ImageService/PullImage
	Jan 16 04:29:25 multinode-701570 crio[905]: time="2024-01-16 04:29:25.254290293Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=1779aff7-8277-4769-9125-17bb41f10bef name=/runtime.v1.ImageService/ImageStatus
	Jan 16 04:29:25 multinode-701570 crio[905]: time="2024-01-16 04:29:25.255042438Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1496796,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=1779aff7-8277-4769-9125-17bb41f10bef name=/runtime.v1.ImageService/ImageStatus
	Jan 16 04:29:25 multinode-701570 crio[905]: time="2024-01-16 04:29:25.255939342Z" level=info msg="Creating container: default/busybox-5bc68d56bd-x6w9z/busybox" id=5056163f-4445-4b18-b7a9-3ae331cdc7f7 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 16 04:29:25 multinode-701570 crio[905]: time="2024-01-16 04:29:25.256107724Z" level=warning msg="Allowed annotations are specified for workload []"
	Jan 16 04:29:25 multinode-701570 crio[905]: time="2024-01-16 04:29:25.316068138Z" level=info msg="Created container b74157839f857ab1a1fb08966306d4adc9f4ec1d494a44c2c24c90d08666f1c3: default/busybox-5bc68d56bd-x6w9z/busybox" id=5056163f-4445-4b18-b7a9-3ae331cdc7f7 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 16 04:29:25 multinode-701570 crio[905]: time="2024-01-16 04:29:25.316780473Z" level=info msg="Starting container: b74157839f857ab1a1fb08966306d4adc9f4ec1d494a44c2c24c90d08666f1c3" id=5fb06c47-88b8-407d-a196-252176c7cbec name=/runtime.v1.RuntimeService/StartContainer
	Jan 16 04:29:25 multinode-701570 crio[905]: time="2024-01-16 04:29:25.325237243Z" level=info msg="Started container" PID=2067 containerID=b74157839f857ab1a1fb08966306d4adc9f4ec1d494a44c2c24c90d08666f1c3 description=default/busybox-5bc68d56bd-x6w9z/busybox id=5fb06c47-88b8-407d-a196-252176c7cbec name=/runtime.v1.RuntimeService/StartContainer sandboxID=e6ac718f7c2d5dba95e17b7ade93629991e25bc442d306d8d50d267c3db32733
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	b74157839f857       gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3   4 seconds ago        Running             busybox                   0                   e6ac718f7c2d5       busybox-5bc68d56bd-x6w9z
	d7f7f79cec22c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      56 seconds ago       Running             storage-provisioner       0                   9b1d0c8720b7f       storage-provisioner
	d07d943ff9b82       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      57 seconds ago       Running             coredns                   0                   5203a70157502       coredns-5dd5756b68-hm6kd
	0d4a90183d4e6       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39                                      58 seconds ago       Running             kube-proxy                0                   e60e20dcc615c       kube-proxy-zmnvg
	108e20499eca1       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                      58 seconds ago       Running             kindnet-cni               0                   5ea1fb8e41bb3       kindnet-xkvsh
	3a3730f1746ca       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54                                      About a minute ago   Running             kube-scheduler            0                   bc177624fe247       kube-scheduler-multinode-701570
	10549e6a99528       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b                                      About a minute ago   Running             kube-controller-manager   0                   9a8bcba337239       kube-controller-manager-multinode-701570
	ad52483011249       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      About a minute ago   Running             etcd                      0                   dc328ad0ae31e       etcd-multinode-701570
	00c13145cb3f3       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419                                      About a minute ago   Running             kube-apiserver            0                   c58d187cf49fb       kube-apiserver-multinode-701570
	
	
	==> coredns [d07d943ff9b8223f93487947cc7e89abb73cf7da9ce5eb427cc39b8e43a6dd9b] <==
	[INFO] 10.244.1.2:53804 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000123853s
	[INFO] 10.244.0.3:38033 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000101765s
	[INFO] 10.244.0.3:48392 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001122277s
	[INFO] 10.244.0.3:33841 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000127307s
	[INFO] 10.244.0.3:56703 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000067149s
	[INFO] 10.244.0.3:42374 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.000899267s
	[INFO] 10.244.0.3:49710 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000068248s
	[INFO] 10.244.0.3:55434 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000065204s
	[INFO] 10.244.0.3:58632 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000259128s
	[INFO] 10.244.1.2:50897 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117963s
	[INFO] 10.244.1.2:38745 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000084347s
	[INFO] 10.244.1.2:56846 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073221s
	[INFO] 10.244.1.2:44487 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000074131s
	[INFO] 10.244.0.3:46604 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000084297s
	[INFO] 10.244.0.3:43724 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000049484s
	[INFO] 10.244.0.3:58823 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00005311s
	[INFO] 10.244.0.3:60432 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000048663s
	[INFO] 10.244.1.2:58375 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119349s
	[INFO] 10.244.1.2:48377 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000179713s
	[INFO] 10.244.1.2:39754 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000109667s
	[INFO] 10.244.1.2:56525 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000104498s
	[INFO] 10.244.0.3:54700 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000092535s
	[INFO] 10.244.0.3:60660 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000050764s
	[INFO] 10.244.0.3:48353 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000063507s
	[INFO] 10.244.0.3:35683 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000064285s
	
	
	==> describe nodes <==
	Name:               multinode-701570
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-701570
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e8fa5f64d0e7272be43ff25ed3826261f0a2578
	                    minikube.k8s.io/name=multinode-701570
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_16T04_28_19_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 04:28:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-701570
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Jan 2024 04:29:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 04:28:32 +0000   Tue, 16 Jan 2024 04:28:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 04:28:32 +0000   Tue, 16 Jan 2024 04:28:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 04:28:32 +0000   Tue, 16 Jan 2024 04:28:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 04:28:32 +0000   Tue, 16 Jan 2024 04:28:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-701570
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 8176dd7dc8834742a8feb70d9f7b1412
	  System UUID:                0a2102dc-a019-4b03-a892-502d6de68f0c
	  Boot ID:                    3a165b82-f13d-4880-a2c5-3d4f8ff28eca
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-x6w9z                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 coredns-5dd5756b68-hm6kd                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     59s
	  kube-system                 etcd-multinode-701570                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         73s
	  kube-system                 kindnet-xkvsh                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      60s
	  kube-system                 kube-apiserver-multinode-701570             250m (12%)    0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-controller-manager-multinode-701570    200m (10%)    0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-proxy-zmnvg                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-scheduler-multinode-701570             100m (5%)     0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 58s                kube-proxy       
	  Normal  NodeHasSufficientMemory  81s (x8 over 81s)  kubelet          Node multinode-701570 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    81s (x8 over 81s)  kubelet          Node multinode-701570 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     81s (x8 over 81s)  kubelet          Node multinode-701570 status is now: NodeHasSufficientPID
	  Normal  Starting                 72s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  72s                kubelet          Node multinode-701570 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    72s                kubelet          Node multinode-701570 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     72s                kubelet          Node multinode-701570 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           60s                node-controller  Node multinode-701570 event: Registered Node multinode-701570 in Controller
	  Normal  NodeReady                58s                kubelet          Node multinode-701570 status is now: NodeReady
	
	
	Name:               multinode-701570-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-701570-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e8fa5f64d0e7272be43ff25ed3826261f0a2578
	                    minikube.k8s.io/name=multinode-701570
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_16T04_28_49_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 04:28:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-701570-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Jan 2024 04:29:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 04:29:20 +0000   Tue, 16 Jan 2024 04:28:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 04:29:20 +0000   Tue, 16 Jan 2024 04:28:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 04:29:20 +0000   Tue, 16 Jan 2024 04:28:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 04:29:20 +0000   Tue, 16 Jan 2024 04:29:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-701570-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 ae65da24c0fe4c54a90d208d1812d3e0
	  System UUID:                56f18438-168c-487b-adfb-f7bfe2c64f5c
	  Boot ID:                    3a165b82-f13d-4880-a2c5-3d4f8ff28eca
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-v42wl    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kindnet-g4kbq               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      41s
	  kube-system                 kube-proxy-vfpkz            0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 40s                kube-proxy       
	  Normal  NodeHasSufficientMemory  41s (x5 over 43s)  kubelet          Node multinode-701570-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s (x5 over 43s)  kubelet          Node multinode-701570-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s (x5 over 43s)  kubelet          Node multinode-701570-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           40s                node-controller  Node multinode-701570-m02 event: Registered Node multinode-701570-m02 in Controller
	  Normal  NodeReady                10s                kubelet          Node multinode-701570-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.001333] FS-Cache: O-key=[8] 'eb693b0000000000'
	[  +0.000818] FS-Cache: N-cookie c=0000009c [p=00000093 fl=2 nc=0 na=1]
	[  +0.001133] FS-Cache: N-cookie d=00000000b2a3e576{9p.inode} n=0000000029c1254e
	[  +0.001369] FS-Cache: N-key=[8] 'eb693b0000000000'
	[  +0.005424] FS-Cache: Duplicate cookie detected
	[  +0.000784] FS-Cache: O-cookie c=00000096 [p=00000093 fl=226 nc=0 na=1]
	[  +0.001138] FS-Cache: O-cookie d=00000000b2a3e576{9p.inode} n=00000000c01c346d
	[  +0.001178] FS-Cache: O-key=[8] 'eb693b0000000000'
	[  +0.000799] FS-Cache: N-cookie c=0000009d [p=00000093 fl=2 nc=0 na=1]
	[  +0.001064] FS-Cache: N-cookie d=00000000b2a3e576{9p.inode} n=00000000fbbfa844
	[  +0.001199] FS-Cache: N-key=[8] 'eb693b0000000000'
	[  +2.236228] FS-Cache: Duplicate cookie detected
	[  +0.001025] FS-Cache: O-cookie c=00000094 [p=00000093 fl=226 nc=0 na=1]
	[  +0.001453] FS-Cache: O-cookie d=00000000b2a3e576{9p.inode} n=000000005b84793c
	[  +0.001281] FS-Cache: O-key=[8] 'ea693b0000000000'
	[  +0.000853] FS-Cache: N-cookie c=0000009f [p=00000093 fl=2 nc=0 na=1]
	[  +0.001156] FS-Cache: N-cookie d=00000000b2a3e576{9p.inode} n=000000001c0ef5b4
	[  +0.001206] FS-Cache: N-key=[8] 'ea693b0000000000'
	[  +0.506033] FS-Cache: Duplicate cookie detected
	[  +0.000873] FS-Cache: O-cookie c=00000099 [p=00000093 fl=226 nc=0 na=1]
	[  +0.001071] FS-Cache: O-cookie d=00000000b2a3e576{9p.inode} n=000000006abd7985
	[  +0.001299] FS-Cache: O-key=[8] 'f0693b0000000000'
	[  +0.000818] FS-Cache: N-cookie c=000000a0 [p=00000093 fl=2 nc=0 na=1]
	[  +0.001078] FS-Cache: N-cookie d=00000000b2a3e576{9p.inode} n=000000001298e7f4
	[  +0.001270] FS-Cache: N-key=[8] 'f0693b0000000000'
	
	
	==> etcd [ad52483011249caf314e8872a36f731cbc3faf8bd8a97bdbf00812e045d782a0] <==
	{"level":"info","ts":"2024-01-16T04:28:10.544977Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-16T04:28:10.545006Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-16T04:28:10.545016Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-16T04:28:10.545504Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2024-01-16T04:28:10.545528Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2024-01-16T04:28:10.545977Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2024-01-16T04:28:10.546089Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2024-01-16T04:28:11.476784Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-16T04:28:11.476912Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-16T04:28:11.476964Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2024-01-16T04:28:11.477002Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2024-01-16T04:28:11.477035Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2024-01-16T04:28:11.477071Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2024-01-16T04:28:11.477104Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2024-01-16T04:28:11.480922Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-701570 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-16T04:28:11.484909Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-16T04:28:11.485979Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2024-01-16T04:28:11.486147Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T04:28:11.486316Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-16T04:28:11.48747Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-16T04:28:11.487596Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-16T04:28:11.487646Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-16T04:28:11.488204Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T04:28:11.488315Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T04:28:11.488944Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 04:29:30 up 11:11,  0 users,  load average: 1.46, 1.89, 1.98
	Linux multinode-701570 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [108e20499eca181b146b03a62979540112e638e75fcc6f57e8c61bffc3ce3214] <==
	podIP = 192.168.58.2
	I0116 04:28:32.014169       1 main.go:116] setting mtu 1500 for CNI 
	I0116 04:28:32.014179       1 main.go:146] kindnetd IP family: "ipv4"
	I0116 04:28:32.014190       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0116 04:28:32.399004       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0116 04:28:32.399125       1 main.go:227] handling current node
	I0116 04:28:42.501133       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0116 04:28:42.501237       1 main.go:227] handling current node
	I0116 04:28:52.512679       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0116 04:28:52.512715       1 main.go:227] handling current node
	I0116 04:28:52.512729       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0116 04:28:52.512735       1 main.go:250] Node multinode-701570-m02 has CIDR [10.244.1.0/24] 
	I0116 04:28:52.512959       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I0116 04:29:02.525808       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0116 04:29:02.525840       1 main.go:227] handling current node
	I0116 04:29:02.525851       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0116 04:29:02.525857       1 main.go:250] Node multinode-701570-m02 has CIDR [10.244.1.0/24] 
	I0116 04:29:12.536926       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0116 04:29:12.536952       1 main.go:227] handling current node
	I0116 04:29:12.536971       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0116 04:29:12.536977       1 main.go:250] Node multinode-701570-m02 has CIDR [10.244.1.0/24] 
	I0116 04:29:22.550026       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0116 04:29:22.550060       1 main.go:227] handling current node
	I0116 04:29:22.550072       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0116 04:29:22.550078       1 main.go:250] Node multinode-701570-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [00c13145cb3f35a92eff65339c93977b6b72aed27621358591b1182e7ad4f7f3] <==
	I0116 04:28:15.337285       1 cache.go:39] Caches are synced for autoregister controller
	I0116 04:28:15.344275       1 controller.go:624] quota admission added evaluator for: namespaces
	I0116 04:28:15.348890       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0116 04:28:15.358938       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0116 04:28:15.360019       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0116 04:28:15.360496       1 shared_informer.go:318] Caches are synced for configmaps
	I0116 04:28:15.361572       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0116 04:28:15.361694       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0116 04:28:15.415735       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0116 04:28:16.029215       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0116 04:28:16.034690       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0116 04:28:16.034714       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0116 04:28:16.668573       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0116 04:28:16.714737       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0116 04:28:16.796854       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0116 04:28:16.802656       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0116 04:28:16.803689       1 controller.go:624] quota admission added evaluator for: endpoints
	I0116 04:28:16.808103       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0116 04:28:17.314760       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0116 04:28:18.394541       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0116 04:28:18.405857       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0116 04:28:18.417079       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0116 04:28:30.905452       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0116 04:28:30.967076       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	E0116 04:29:25.740522       1 watch.go:287] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoderWithAllocator{writer:responsewriter.outerWithCloseNotifyAndFlush{UserProvidedDecorator:(*metrics.ResponseWriterDelegator)(0x400ea8fb30), InnerCloseNotifierFlusher:struct { httpsnoop.Unwrapper; http.ResponseWriter; http.Flusher; http.CloseNotifier; http.Pusher }{Unwrapper:(*httpsnoop.rw)(0x400a29a6e0), ResponseWriter:(*httpsnoop.rw)(0x400a29a6e0), Flusher:(*httpsnoop.rw)(0x400a29a6e0), CloseNotifier:(*httpsnoop.rw)(0x400a29a6e0), Pusher:(*httpsnoop.rw)(0x400a29a6e0)}}, encoder:(*versioning.codec)(0x400dede3c0), memAllocator:(*runtime.Allocator)(0x400c11e000)})
	
	
	==> kube-controller-manager [10549e6a995280be53237eecd67110f65e6eab488196ff35b6bcf238c2471621] <==
	I0116 04:28:31.802148       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="75.748µs"
	I0116 04:28:32.673803       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="87.358µs"
	I0116 04:28:32.686947       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="62.751µs"
	I0116 04:28:33.652388       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.314815ms"
	I0116 04:28:33.653047       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="166.691µs"
	I0116 04:28:35.159130       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0116 04:28:49.068832       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-701570-m02\" does not exist"
	I0116 04:28:49.092356       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-701570-m02" podCIDRs=["10.244.1.0/24"]
	I0116 04:28:49.093338       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-g4kbq"
	I0116 04:28:49.099153       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-vfpkz"
	I0116 04:28:50.161607       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-701570-m02"
	I0116 04:28:50.161932       1 event.go:307] "Event occurred" object="multinode-701570-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-701570-m02 event: Registered Node multinode-701570-m02 in Controller"
	I0116 04:29:20.605313       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-701570-m02"
	I0116 04:29:23.117271       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I0116 04:29:23.142564       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-v42wl"
	I0116 04:29:23.159109       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-x6w9z"
	I0116 04:29:23.176167       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="60.326305ms"
	I0116 04:29:23.192354       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="10.466434ms"
	I0116 04:29:23.192423       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="32.007µs"
	I0116 04:29:23.199298       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="78.136µs"
	I0116 04:29:25.187589       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-v42wl" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-v42wl"
	I0116 04:29:25.701272       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="10.660218ms"
	I0116 04:29:25.701343       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="38.334µs"
	I0116 04:29:25.732136       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="6.685196ms"
	I0116 04:29:25.732222       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="35.257µs"
	
	
	==> kube-proxy [0d4a90183d4e6e1a7f9e3ac68a99f8496fb2f02396ed7e09c72c1096e8cd62db] <==
	I0116 04:28:32.059312       1 server_others.go:69] "Using iptables proxy"
	I0116 04:28:32.073982       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I0116 04:28:32.106050       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0116 04:28:32.109562       1 server_others.go:152] "Using iptables Proxier"
	I0116 04:28:32.109597       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0116 04:28:32.109610       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0116 04:28:32.109682       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0116 04:28:32.109920       1 server.go:846] "Version info" version="v1.28.4"
	I0116 04:28:32.109936       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0116 04:28:32.110899       1 config.go:188] "Starting service config controller"
	I0116 04:28:32.110950       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0116 04:28:32.110969       1 config.go:97] "Starting endpoint slice config controller"
	I0116 04:28:32.110973       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0116 04:28:32.111474       1 config.go:315] "Starting node config controller"
	I0116 04:28:32.111485       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0116 04:28:32.211094       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0116 04:28:32.211103       1 shared_informer.go:318] Caches are synced for service config
	I0116 04:28:32.211634       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [3a3730f1746cade98a2787f6bc0b4cc6fe1ff0abffda200290e2431f1faae949] <==
	I0116 04:28:13.539765       1 serving.go:348] Generated self-signed cert in-memory
	I0116 04:28:16.856188       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0116 04:28:16.856225       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0116 04:28:16.860508       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0116 04:28:16.860645       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0116 04:28:16.860727       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0116 04:28:16.860786       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0116 04:28:16.860831       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0116 04:28:16.860861       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0116 04:28:16.861259       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0116 04:28:16.861331       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0116 04:28:16.961638       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I0116 04:28:16.961781       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0116 04:28:16.961827       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Jan 16 04:28:31 multinode-701570 kubelet[1393]: I0116 04:28:31.383112    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/49fc2d49-9a21-4b2f-afe7-0bbf3a4fa6b1-kube-proxy\") pod \"kube-proxy-zmnvg\" (UID: \"49fc2d49-9a21-4b2f-afe7-0bbf3a4fa6b1\") " pod="kube-system/kube-proxy-zmnvg"
	Jan 16 04:28:31 multinode-701570 kubelet[1393]: I0116 04:28:31.383165    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/49fc2d49-9a21-4b2f-afe7-0bbf3a4fa6b1-lib-modules\") pod \"kube-proxy-zmnvg\" (UID: \"49fc2d49-9a21-4b2f-afe7-0bbf3a4fa6b1\") " pod="kube-system/kube-proxy-zmnvg"
	Jan 16 04:28:31 multinode-701570 kubelet[1393]: I0116 04:28:31.383192    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lvpv\" (UniqueName: \"kubernetes.io/projected/49fc2d49-9a21-4b2f-afe7-0bbf3a4fa6b1-kube-api-access-8lvpv\") pod \"kube-proxy-zmnvg\" (UID: \"49fc2d49-9a21-4b2f-afe7-0bbf3a4fa6b1\") " pod="kube-system/kube-proxy-zmnvg"
	Jan 16 04:28:31 multinode-701570 kubelet[1393]: I0116 04:28:31.383217    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9653a16d-c4ad-4021-be3b-8e4292b418fc-xtables-lock\") pod \"kindnet-xkvsh\" (UID: \"9653a16d-c4ad-4021-be3b-8e4292b418fc\") " pod="kube-system/kindnet-xkvsh"
	Jan 16 04:28:31 multinode-701570 kubelet[1393]: I0116 04:28:31.383240    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9653a16d-c4ad-4021-be3b-8e4292b418fc-lib-modules\") pod \"kindnet-xkvsh\" (UID: \"9653a16d-c4ad-4021-be3b-8e4292b418fc\") " pod="kube-system/kindnet-xkvsh"
	Jan 16 04:28:31 multinode-701570 kubelet[1393]: I0116 04:28:31.383266    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9653a16d-c4ad-4021-be3b-8e4292b418fc-cni-cfg\") pod \"kindnet-xkvsh\" (UID: \"9653a16d-c4ad-4021-be3b-8e4292b418fc\") " pod="kube-system/kindnet-xkvsh"
	Jan 16 04:28:31 multinode-701570 kubelet[1393]: I0116 04:28:31.383288    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhn54\" (UniqueName: \"kubernetes.io/projected/9653a16d-c4ad-4021-be3b-8e4292b418fc-kube-api-access-fhn54\") pod \"kindnet-xkvsh\" (UID: \"9653a16d-c4ad-4021-be3b-8e4292b418fc\") " pod="kube-system/kindnet-xkvsh"
	Jan 16 04:28:31 multinode-701570 kubelet[1393]: I0116 04:28:31.383313    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/49fc2d49-9a21-4b2f-afe7-0bbf3a4fa6b1-xtables-lock\") pod \"kube-proxy-zmnvg\" (UID: \"49fc2d49-9a21-4b2f-afe7-0bbf3a4fa6b1\") " pod="kube-system/kube-proxy-zmnvg"
	Jan 16 04:28:31 multinode-701570 kubelet[1393]: W0116 04:28:31.863748    1393 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/28e792c4e9c30d33bd257e8246d0d4bffbcaeaf8e6ab5fe81d7d83b6cf928fc0/crio-5ea1fb8e41bb38cd999fb1a9ed230af07221b898f4dc9df9f4b50a97fa37e40b WatchSource:0}: Error finding container 5ea1fb8e41bb38cd999fb1a9ed230af07221b898f4dc9df9f4b50a97fa37e40b: Status 404 returned error can't find the container with id 5ea1fb8e41bb38cd999fb1a9ed230af07221b898f4dc9df9f4b50a97fa37e40b
	Jan 16 04:28:32 multinode-701570 kubelet[1393]: I0116 04:28:32.638481    1393 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-xkvsh" podStartSLOduration=2.638437694 podCreationTimestamp="2024-01-16 04:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-16 04:28:32.638377404 +0000 UTC m=+14.271397589" watchObservedRunningTime="2024-01-16 04:28:32.638437694 +0000 UTC m=+14.271457871"
	Jan 16 04:28:32 multinode-701570 kubelet[1393]: I0116 04:28:32.638609    1393 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-zmnvg" podStartSLOduration=1.638591512 podCreationTimestamp="2024-01-16 04:28:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-16 04:28:32.623888985 +0000 UTC m=+14.256909161" watchObservedRunningTime="2024-01-16 04:28:32.638591512 +0000 UTC m=+14.271611697"
	Jan 16 04:28:32 multinode-701570 kubelet[1393]: I0116 04:28:32.646791    1393 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Jan 16 04:28:32 multinode-701570 kubelet[1393]: I0116 04:28:32.672655    1393 topology_manager.go:215] "Topology Admit Handler" podUID="0707ad3b-2557-49c2-bdc3-77554baac045" podNamespace="kube-system" podName="coredns-5dd5756b68-hm6kd"
	Jan 16 04:28:32 multinode-701570 kubelet[1393]: I0116 04:28:32.793013    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gdvl\" (UniqueName: \"kubernetes.io/projected/0707ad3b-2557-49c2-bdc3-77554baac045-kube-api-access-5gdvl\") pod \"coredns-5dd5756b68-hm6kd\" (UID: \"0707ad3b-2557-49c2-bdc3-77554baac045\") " pod="kube-system/coredns-5dd5756b68-hm6kd"
	Jan 16 04:28:32 multinode-701570 kubelet[1393]: I0116 04:28:32.793069    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0707ad3b-2557-49c2-bdc3-77554baac045-config-volume\") pod \"coredns-5dd5756b68-hm6kd\" (UID: \"0707ad3b-2557-49c2-bdc3-77554baac045\") " pod="kube-system/coredns-5dd5756b68-hm6kd"
	Jan 16 04:28:32 multinode-701570 kubelet[1393]: I0116 04:28:32.988936    1393 topology_manager.go:215] "Topology Admit Handler" podUID="afb9aebf-f18a-478d-b561-54bd61c7403a" podNamespace="kube-system" podName="storage-provisioner"
	Jan 16 04:28:32 multinode-701570 kubelet[1393]: I0116 04:28:32.996864    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/afb9aebf-f18a-478d-b561-54bd61c7403a-tmp\") pod \"storage-provisioner\" (UID: \"afb9aebf-f18a-478d-b561-54bd61c7403a\") " pod="kube-system/storage-provisioner"
	Jan 16 04:28:32 multinode-701570 kubelet[1393]: I0116 04:28:32.996909    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmh5d\" (UniqueName: \"kubernetes.io/projected/afb9aebf-f18a-478d-b561-54bd61c7403a-kube-api-access-rmh5d\") pod \"storage-provisioner\" (UID: \"afb9aebf-f18a-478d-b561-54bd61c7403a\") " pod="kube-system/storage-provisioner"
	Jan 16 04:28:33 multinode-701570 kubelet[1393]: W0116 04:28:33.015363    1393 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/28e792c4e9c30d33bd257e8246d0d4bffbcaeaf8e6ab5fe81d7d83b6cf928fc0/crio-5203a701575029f93d1d79e7bebd4f6606d2de74689d665c3f0c58874647edb6 WatchSource:0}: Error finding container 5203a701575029f93d1d79e7bebd4f6606d2de74689d665c3f0c58874647edb6: Status 404 returned error can't find the container with id 5203a701575029f93d1d79e7bebd4f6606d2de74689d665c3f0c58874647edb6
	Jan 16 04:28:33 multinode-701570 kubelet[1393]: I0116 04:28:33.640970    1393 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-hm6kd" podStartSLOduration=2.640927022 podCreationTimestamp="2024-01-16 04:28:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-16 04:28:33.640613274 +0000 UTC m=+15.273633451" watchObservedRunningTime="2024-01-16 04:28:33.640927022 +0000 UTC m=+15.273947199"
	Jan 16 04:28:33 multinode-701570 kubelet[1393]: I0116 04:28:33.641062    1393 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=2.641042958 podCreationTimestamp="2024-01-16 04:28:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-16 04:28:33.628681335 +0000 UTC m=+15.261701520" watchObservedRunningTime="2024-01-16 04:28:33.641042958 +0000 UTC m=+15.274063151"
	Jan 16 04:29:23 multinode-701570 kubelet[1393]: I0116 04:29:23.172028    1393 topology_manager.go:215] "Topology Admit Handler" podUID="abcd8745-b810-4e14-bb12-464bc549bcf9" podNamespace="default" podName="busybox-5bc68d56bd-x6w9z"
	Jan 16 04:29:23 multinode-701570 kubelet[1393]: I0116 04:29:23.288162    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8btx\" (UniqueName: \"kubernetes.io/projected/abcd8745-b810-4e14-bb12-464bc549bcf9-kube-api-access-c8btx\") pod \"busybox-5bc68d56bd-x6w9z\" (UID: \"abcd8745-b810-4e14-bb12-464bc549bcf9\") " pod="default/busybox-5bc68d56bd-x6w9z"
	Jan 16 04:29:23 multinode-701570 kubelet[1393]: W0116 04:29:23.510332    1393 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/28e792c4e9c30d33bd257e8246d0d4bffbcaeaf8e6ab5fe81d7d83b6cf928fc0/crio-e6ac718f7c2d5dba95e17b7ade93629991e25bc442d306d8d50d267c3db32733 WatchSource:0}: Error finding container e6ac718f7c2d5dba95e17b7ade93629991e25bc442d306d8d50d267c3db32733: Status 404 returned error can't find the container with id e6ac718f7c2d5dba95e17b7ade93629991e25bc442d306d8d50d267c3db32733
	Jan 16 04:29:26 multinode-701570 kubelet[1393]: E0116 04:29:26.633121    1393 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:52830->127.0.0.1:46651: write tcp 127.0.0.1:52830->127.0.0.1:46651: write: broken pipe
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p multinode-701570 -n multinode-701570
E0116 04:29:31.345472 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/client.crt: no such file or directory
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-701570 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.97s)
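
The kubelet log above ends with a broken pipe in the kubelet's exec/port-forward proxy, right where the harness was exec'ing into the busybox pod. A rough manual equivalent of what this test exercises, run against the same profile (the pod name is taken from the kubelet log above; host.minikube.internal as the in-cluster host alias is an assumption, not shown in this report):

	kubectl --context multinode-701570 exec busybox-5bc68d56bd-x6w9z -- sh -c "nslookup host.minikube.internal && ping -c 1 host.minikube.internal"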

                                                
                                    

Test pass (284/320)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 9.45
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.1
9 TestDownloadOnly/v1.16.0/DeleteAll 0.25
10 TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.28.4/json-events 8.45
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.09
18 TestDownloadOnly/v1.28.4/DeleteAll 0.25
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.16
21 TestDownloadOnly/v1.29.0-rc.2/json-events 10.51
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.44
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.39
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.27
30 TestBinaryMirror 0.62
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.1
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.09
36 TestAddons/Setup 166.8
38 TestAddons/parallel/Registry 16.77
40 TestAddons/parallel/InspektorGadget 11.02
41 TestAddons/parallel/MetricsServer 6.86
44 TestAddons/parallel/CSI 54.06
45 TestAddons/parallel/Headlamp 11.49
46 TestAddons/parallel/CloudSpanner 5.61
47 TestAddons/parallel/LocalPath 8.82
48 TestAddons/parallel/NvidiaDevicePlugin 5.56
49 TestAddons/parallel/Yakd 6
52 TestAddons/serial/GCPAuth/Namespaces 0.18
53 TestAddons/StoppedEnableDisable 12.37
54 TestCertOptions 39.86
55 TestCertExpiration 257.15
57 TestForceSystemdFlag 38.12
58 TestForceSystemdEnv 38.18
64 TestErrorSpam/setup 29.81
65 TestErrorSpam/start 0.87
66 TestErrorSpam/status 1.19
67 TestErrorSpam/pause 1.94
68 TestErrorSpam/unpause 2.05
69 TestErrorSpam/stop 1.51
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 77.65
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 33.13
76 TestFunctional/serial/KubeContext 0.09
77 TestFunctional/serial/KubectlGetPods 0.13
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.76
81 TestFunctional/serial/CacheCmd/cache/add_local 1.1
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
83 TestFunctional/serial/CacheCmd/cache/list 0.07
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.37
85 TestFunctional/serial/CacheCmd/cache/cache_reload 2.11
86 TestFunctional/serial/CacheCmd/cache/delete 0.15
87 TestFunctional/serial/MinikubeKubectlCmd 0.17
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.16
89 TestFunctional/serial/ExtraConfig 35.16
90 TestFunctional/serial/ComponentHealth 0.1
91 TestFunctional/serial/LogsCmd 1.95
92 TestFunctional/serial/LogsFileCmd 1.94
93 TestFunctional/serial/InvalidService 4.84
95 TestFunctional/parallel/ConfigCmd 0.64
96 TestFunctional/parallel/DashboardCmd 13.2
97 TestFunctional/parallel/DryRun 0.67
98 TestFunctional/parallel/InternationalLanguage 0.27
99 TestFunctional/parallel/StatusCmd 1.24
103 TestFunctional/parallel/ServiceCmdConnect 10.88
104 TestFunctional/parallel/AddonsCmd 0.28
105 TestFunctional/parallel/PersistentVolumeClaim 24.73
107 TestFunctional/parallel/SSHCmd 0.89
108 TestFunctional/parallel/CpCmd 2.31
110 TestFunctional/parallel/FileSync 0.4
111 TestFunctional/parallel/CertSync 2.57
115 TestFunctional/parallel/NodeLabels 0.09
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.85
119 TestFunctional/parallel/License 0.37
120 TestFunctional/parallel/Version/short 0.08
121 TestFunctional/parallel/Version/components 1.46
122 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
123 TestFunctional/parallel/ImageCommands/ImageListTable 0.32
125 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
126 TestFunctional/parallel/ImageCommands/ImageBuild 3.11
127 TestFunctional/parallel/ImageCommands/Setup 1.75
128 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
129 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
130 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
131 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.92
132 TestFunctional/parallel/ServiceCmd/DeployApp 11.29
133 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.24
134 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.05
135 TestFunctional/parallel/ServiceCmd/List 0.45
136 TestFunctional/parallel/ServiceCmd/JSONOutput 0.5
137 TestFunctional/parallel/ServiceCmd/HTTPS 0.61
138 TestFunctional/parallel/ServiceCmd/Format 0.49
139 TestFunctional/parallel/ServiceCmd/URL 0.51
140 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.12
141 TestFunctional/parallel/ImageCommands/ImageRemove 0.82
143 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.74
144 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.67
145 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
147 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.63
148 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.59
149 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
150 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
154 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
155 TestFunctional/parallel/ProfileCmd/profile_not_create 0.47
156 TestFunctional/parallel/ProfileCmd/profile_list 0.44
157 TestFunctional/parallel/ProfileCmd/profile_json_output 0.45
158 TestFunctional/parallel/MountCmd/any-port 7.92
159 TestFunctional/parallel/MountCmd/specific-port 2.7
160 TestFunctional/parallel/MountCmd/VerifyCleanup 3.71
161 TestFunctional/delete_addon-resizer_images 0.09
162 TestFunctional/delete_my-image_image 0.02
163 TestFunctional/delete_minikube_cached_images 0.02
167 TestIngressAddonLegacy/StartLegacyK8sCluster 91.82
169 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 11.5
170 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.71
174 TestJSONOutput/start/Command 51.21
175 TestJSONOutput/start/Audit 0
177 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/pause/Command 0.84
181 TestJSONOutput/pause/Audit 0
183 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/unpause/Command 0.77
187 TestJSONOutput/unpause/Audit 0
189 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/stop/Command 5.83
193 TestJSONOutput/stop/Audit 0
195 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
197 TestErrorJSONOutput 0.27
199 TestKicCustomNetwork/create_custom_network 49.2
200 TestKicCustomNetwork/use_default_bridge_network 34.63
201 TestKicExistingNetwork 34.15
202 TestKicCustomSubnet 33.64
203 TestKicStaticIP 31.83
204 TestMainNoArgs 0.07
205 TestMinikubeProfile 68.9
208 TestMountStart/serial/StartWithMountFirst 6.89
209 TestMountStart/serial/VerifyMountFirst 0.32
210 TestMountStart/serial/StartWithMountSecond 9.61
211 TestMountStart/serial/VerifyMountSecond 0.29
212 TestMountStart/serial/DeleteFirst 1.67
213 TestMountStart/serial/VerifyMountPostDelete 0.31
214 TestMountStart/serial/Stop 1.22
215 TestMountStart/serial/RestartStopped 7.94
216 TestMountStart/serial/VerifyMountPostStop 0.31
219 TestMultiNode/serial/FreshStart2Nodes 92.75
220 TestMultiNode/serial/DeployApp2Nodes 4.8
222 TestMultiNode/serial/AddNode 50.44
223 TestMultiNode/serial/MultiNodeLabels 0.09
224 TestMultiNode/serial/ProfileList 0.35
225 TestMultiNode/serial/CopyFile 11.4
226 TestMultiNode/serial/StopNode 2.4
227 TestMultiNode/serial/StartAfterStop 13.37
228 TestMultiNode/serial/RestartKeepsNodes 122.33
229 TestMultiNode/serial/DeleteNode 5.2
230 TestMultiNode/serial/StopMultiNode 24.05
231 TestMultiNode/serial/RestartMultiNode 79.42
232 TestMultiNode/serial/ValidateNameConflict 34.18
237 TestPreload 170.72
239 TestScheduledStopUnix 107.36
242 TestInsufficientStorage 11.11
243 TestRunningBinaryUpgrade 79.4
245 TestKubernetesUpgrade 420.74
246 TestMissingContainerUpgrade 159.99
248 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
249 TestNoKubernetes/serial/StartWithK8s 44.69
250 TestNoKubernetes/serial/StartWithStopK8s 29.21
251 TestNoKubernetes/serial/Start 9.74
252 TestNoKubernetes/serial/VerifyK8sNotRunning 0.31
253 TestNoKubernetes/serial/ProfileList 3.65
254 TestNoKubernetes/serial/Stop 1.23
255 TestNoKubernetes/serial/StartNoArgs 7.86
256 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.3
257 TestStoppedBinaryUpgrade/Setup 1.23
258 TestStoppedBinaryUpgrade/Upgrade 74.43
259 TestStoppedBinaryUpgrade/MinikubeLogs 1.1
268 TestPause/serial/Start 54.28
269 TestPause/serial/SecondStartNoReconfiguration 30.67
270 TestPause/serial/Pause 0.83
271 TestPause/serial/VerifyStatus 0.4
272 TestPause/serial/Unpause 0.99
273 TestPause/serial/PauseAgain 1.06
274 TestPause/serial/DeletePaused 3.19
275 TestPause/serial/VerifyDeletedResources 0.37
283 TestNetworkPlugins/group/false 6.38
288 TestStartStop/group/old-k8s-version/serial/FirstStart 133.45
289 TestStartStop/group/old-k8s-version/serial/DeployApp 9.5
290 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.2
291 TestStartStop/group/old-k8s-version/serial/Stop 12.03
292 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
293 TestStartStop/group/old-k8s-version/serial/SecondStart 427.13
295 TestStartStop/group/no-preload/serial/FirstStart 68.02
296 TestStartStop/group/no-preload/serial/DeployApp 8.36
297 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.13
298 TestStartStop/group/no-preload/serial/Stop 12.05
299 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
300 TestStartStop/group/no-preload/serial/SecondStart 630.04
301 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
302 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.12
303 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.34
304 TestStartStop/group/old-k8s-version/serial/Pause 4.57
306 TestStartStop/group/embed-certs/serial/FirstStart 83.14
307 TestStartStop/group/embed-certs/serial/DeployApp 8.36
308 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.27
309 TestStartStop/group/embed-certs/serial/Stop 12.06
310 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.23
311 TestStartStop/group/embed-certs/serial/SecondStart 601.96
312 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
313 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
314 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.28
315 TestStartStop/group/no-preload/serial/Pause 3.61
317 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 81.3
318 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.39
319 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.54
320 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.38
321 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
322 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 627.17
323 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
324 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.11
325 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.28
326 TestStartStop/group/embed-certs/serial/Pause 3.51
328 TestStartStop/group/newest-cni/serial/FirstStart 51.45
329 TestStartStop/group/newest-cni/serial/DeployApp 0
330 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.18
331 TestStartStop/group/newest-cni/serial/Stop 1.26
332 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.27
333 TestStartStop/group/newest-cni/serial/SecondStart 31.71
334 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
335 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
336 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
337 TestStartStop/group/newest-cni/serial/Pause 3.46
338 TestNetworkPlugins/group/auto/Start 78.24
339 TestNetworkPlugins/group/auto/KubeletFlags 0.34
340 TestNetworkPlugins/group/auto/NetCatPod 12.32
341 TestNetworkPlugins/group/auto/DNS 0.19
342 TestNetworkPlugins/group/auto/Localhost 0.17
343 TestNetworkPlugins/group/auto/HairPin 0.18
344 TestNetworkPlugins/group/kindnet/Start 50.57
345 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
346 TestNetworkPlugins/group/kindnet/KubeletFlags 0.33
347 TestNetworkPlugins/group/kindnet/NetCatPod 10.27
348 TestNetworkPlugins/group/kindnet/DNS 0.21
349 TestNetworkPlugins/group/kindnet/Localhost 0.17
350 TestNetworkPlugins/group/kindnet/HairPin 0.17
351 TestNetworkPlugins/group/calico/Start 79.1
352 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
353 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.15
354 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.35
355 TestStartStop/group/default-k8s-diff-port/serial/Pause 5.38
356 TestNetworkPlugins/group/custom-flannel/Start 71.55
357 TestNetworkPlugins/group/calico/ControllerPod 6.01
358 TestNetworkPlugins/group/calico/KubeletFlags 0.41
359 TestNetworkPlugins/group/calico/NetCatPod 11.34
360 TestNetworkPlugins/group/calico/DNS 0.33
361 TestNetworkPlugins/group/calico/Localhost 0.25
362 TestNetworkPlugins/group/calico/HairPin 0.28
363 TestNetworkPlugins/group/enable-default-cni/Start 89.13
364 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.41
365 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.33
366 TestNetworkPlugins/group/custom-flannel/DNS 0.33
367 TestNetworkPlugins/group/custom-flannel/Localhost 0.31
368 TestNetworkPlugins/group/custom-flannel/HairPin 0.29
369 TestNetworkPlugins/group/flannel/Start 67.62
370 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.36
371 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.29
372 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
373 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
374 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
375 TestNetworkPlugins/group/flannel/ControllerPod 6.01
376 TestNetworkPlugins/group/bridge/Start 93.5
377 TestNetworkPlugins/group/flannel/KubeletFlags 0.44
378 TestNetworkPlugins/group/flannel/NetCatPod 11.36
379 TestNetworkPlugins/group/flannel/DNS 0.25
380 TestNetworkPlugins/group/flannel/Localhost 0.24
381 TestNetworkPlugins/group/flannel/HairPin 0.23
382 TestNetworkPlugins/group/bridge/KubeletFlags 0.32
383 TestNetworkPlugins/group/bridge/NetCatPod 10.26
384 TestNetworkPlugins/group/bridge/DNS 0.19
385 TestNetworkPlugins/group/bridge/Localhost 0.18
386 TestNetworkPlugins/group/bridge/HairPin 0.17
x
+
TestDownloadOnly/v1.16.0/json-events (9.45s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-320084 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-320084 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (9.45298057s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (9.45s)
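
With -o=json, the start command emits one JSON event per line on stdout, which is what this subtest consumes. A sketch of inspecting the same stream by hand (the jq filter is illustrative and not part of the test):

	out/minikube-linux-arm64 start -o=json --download-only -p download-only-320084 --force --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker | jq -r '.type'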

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)
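
This subtest only verifies that the preceding download left the preload tarball in the cache. An equivalent manual check against the cache path recorded in the download log further below (a sketch, assuming the same MINIKUBE_HOME as this run):

	ls -lh /home/jenkins/minikube-integration/17965-2415678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4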

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-320084
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-320084: exit status 85 (100.983478ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-320084 | jenkins | v1.32.0 | 16 Jan 24 04:05 UTC |          |
	|         | -p download-only-320084        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 04:05:27
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 04:05:27.520804 2421011 out.go:296] Setting OutFile to fd 1 ...
	I0116 04:05:27.521016 2421011 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 04:05:27.521042 2421011 out.go:309] Setting ErrFile to fd 2...
	I0116 04:05:27.521062 2421011 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 04:05:27.521351 2421011 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-2415678/.minikube/bin
	W0116 04:05:27.521541 2421011 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17965-2415678/.minikube/config/config.json: open /home/jenkins/minikube-integration/17965-2415678/.minikube/config/config.json: no such file or directory
	I0116 04:05:27.521995 2421011 out.go:303] Setting JSON to true
	I0116 04:05:27.522908 2421011 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":38858,"bootTime":1705339069,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0116 04:05:27.523003 2421011 start.go:138] virtualization:  
	I0116 04:05:27.525832 2421011 out.go:97] [download-only-320084] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0116 04:05:27.527931 2421011 out.go:169] MINIKUBE_LOCATION=17965
	W0116 04:05:27.526072 2421011 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17965-2415678/.minikube/cache/preloaded-tarball: no such file or directory
	I0116 04:05:27.526163 2421011 notify.go:220] Checking for updates...
	I0116 04:05:27.531690 2421011 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 04:05:27.533632 2421011 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17965-2415678/kubeconfig
	I0116 04:05:27.535897 2421011 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-2415678/.minikube
	I0116 04:05:27.537914 2421011 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0116 04:05:27.542073 2421011 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0116 04:05:27.542354 2421011 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 04:05:27.565354 2421011 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0116 04:05:27.565463 2421011 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 04:05:27.647783 2421011 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-01-16 04:05:27.637456721 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0116 04:05:27.647895 2421011 docker.go:295] overlay module found
	I0116 04:05:27.650186 2421011 out.go:97] Using the docker driver based on user configuration
	I0116 04:05:27.650213 2421011 start.go:298] selected driver: docker
	I0116 04:05:27.650220 2421011 start.go:902] validating driver "docker" against <nil>
	I0116 04:05:27.650329 2421011 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 04:05:27.720108 2421011 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-01-16 04:05:27.710951617 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0116 04:05:27.720265 2421011 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0116 04:05:27.720529 2421011 start_flags.go:392] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0116 04:05:27.720700 2421011 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0116 04:05:27.722691 2421011 out.go:169] Using Docker driver with root privileges
	I0116 04:05:27.724542 2421011 cni.go:84] Creating CNI manager for ""
	I0116 04:05:27.724562 2421011 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0116 04:05:27.724574 2421011 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0116 04:05:27.724594 2421011 start_flags.go:321] config:
	{Name:download-only-320084 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-320084 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 04:05:27.727074 2421011 out.go:97] Starting control plane node download-only-320084 in cluster download-only-320084
	I0116 04:05:27.727095 2421011 cache.go:121] Beginning downloading kic base image for docker with crio
	I0116 04:05:27.729119 2421011 out.go:97] Pulling base image v0.0.42-1704759386-17866 ...
	I0116 04:05:27.729146 2421011 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0116 04:05:27.729199 2421011 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0116 04:05:27.754602 2421011 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0116 04:05:27.754835 2421011 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0116 04:05:27.754939 2421011 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0116 04:05:27.792435 2421011 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I0116 04:05:27.792459 2421011 cache.go:56] Caching tarball of preloaded images
	I0116 04:05:27.793181 2421011 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0116 04:05:27.795466 2421011 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0116 04:05:27.795488 2421011 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I0116 04:05:27.907681 2421011 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:743cd3b7071469270e4dbdc0d89badaa -> /home/jenkins/minikube-integration/17965-2415678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-320084"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.10s)
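
Exit status 85 is the expected result here, not a regression: a --download-only profile never creates a node, so "minikube logs" has nothing to collect, as the trailing hint in the stdout above shows. The same check by hand (a sketch; the expected exit code is taken from this run):

	out/minikube-linux-arm64 logs -p download-only-320084; echo "exit=$?"   # expect 85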

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/DeleteAll (0.25s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.16.0/DeleteAll (0.25s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-320084
--- PASS: TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/json-events (8.45s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-925235 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-925235 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio: (8.448215232s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (8.45s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-925235
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-925235: exit status 85 (88.251502ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-320084 | jenkins | v1.32.0 | 16 Jan 24 04:05 UTC |                     |
	|         | -p download-only-320084        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 16 Jan 24 04:05 UTC | 16 Jan 24 04:05 UTC |
	| delete  | -p download-only-320084        | download-only-320084 | jenkins | v1.32.0 | 16 Jan 24 04:05 UTC | 16 Jan 24 04:05 UTC |
	| start   | -o=json --download-only        | download-only-925235 | jenkins | v1.32.0 | 16 Jan 24 04:05 UTC |                     |
	|         | -p download-only-925235        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 04:05:37
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 04:05:37.475546 2421175 out.go:296] Setting OutFile to fd 1 ...
	I0116 04:05:37.475786 2421175 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 04:05:37.475813 2421175 out.go:309] Setting ErrFile to fd 2...
	I0116 04:05:37.475834 2421175 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 04:05:37.476134 2421175 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-2415678/.minikube/bin
	I0116 04:05:37.476591 2421175 out.go:303] Setting JSON to true
	I0116 04:05:37.477597 2421175 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":38868,"bootTime":1705339069,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0116 04:05:37.477700 2421175 start.go:138] virtualization:  
	I0116 04:05:37.480018 2421175 out.go:97] [download-only-925235] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0116 04:05:37.482380 2421175 out.go:169] MINIKUBE_LOCATION=17965
	I0116 04:05:37.480314 2421175 notify.go:220] Checking for updates...
	I0116 04:05:37.486217 2421175 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 04:05:37.488243 2421175 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17965-2415678/kubeconfig
	I0116 04:05:37.490155 2421175 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-2415678/.minikube
	I0116 04:05:37.492119 2421175 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0116 04:05:37.496112 2421175 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0116 04:05:37.496379 2421175 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 04:05:37.520301 2421175 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0116 04:05:37.520425 2421175 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 04:05:37.604430 2421175 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2024-01-16 04:05:37.594867342 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0116 04:05:37.604531 2421175 docker.go:295] overlay module found
	I0116 04:05:37.606993 2421175 out.go:97] Using the docker driver based on user configuration
	I0116 04:05:37.607023 2421175 start.go:298] selected driver: docker
	I0116 04:05:37.607031 2421175 start.go:902] validating driver "docker" against <nil>
	I0116 04:05:37.607142 2421175 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 04:05:37.681858 2421175 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2024-01-16 04:05:37.671516391 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0116 04:05:37.682021 2421175 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0116 04:05:37.682384 2421175 start_flags.go:392] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0116 04:05:37.682548 2421175 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0116 04:05:37.685007 2421175 out.go:169] Using Docker driver with root privileges
	I0116 04:05:37.687410 2421175 cni.go:84] Creating CNI manager for ""
	I0116 04:05:37.687438 2421175 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0116 04:05:37.687449 2421175 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0116 04:05:37.687464 2421175 start_flags.go:321] config:
	{Name:download-only-925235 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-925235 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 04:05:37.689715 2421175 out.go:97] Starting control plane node download-only-925235 in cluster download-only-925235
	I0116 04:05:37.689735 2421175 cache.go:121] Beginning downloading kic base image for docker with crio
	I0116 04:05:37.691722 2421175 out.go:97] Pulling base image v0.0.42-1704759386-17866 ...
	I0116 04:05:37.691746 2421175 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 04:05:37.691877 2421175 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0116 04:05:37.708832 2421175 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0116 04:05:37.708976 2421175 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0116 04:05:37.708995 2421175 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory, skipping pull
	I0116 04:05:37.709000 2421175 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in cache, skipping pull
	I0116 04:05:37.709007 2421175 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	I0116 04:05:37.743085 2421175 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I0116 04:05:37.743110 2421175 cache.go:56] Caching tarball of preloaded images
	I0116 04:05:37.743270 2421175 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0116 04:05:37.745634 2421175 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0116 04:05:37.745663 2421175 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 ...
	I0116 04:05:37.853656 2421175 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4?checksum=md5:23e2271fd1a7b32f52ce36ae8363c081 -> /home/jenkins/minikube-integration/17965-2415678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-925235"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.09s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/DeleteAll (0.25s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.25s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-925235
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.16s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/json-events (10.51s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-859041 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-859041 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (10.510361178s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (10.51s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.44s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-859041
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-859041: exit status 85 (441.445371ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-320084 | jenkins | v1.32.0 | 16 Jan 24 04:05 UTC |                     |
	|         | -p download-only-320084           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 16 Jan 24 04:05 UTC | 16 Jan 24 04:05 UTC |
	| delete  | -p download-only-320084           | download-only-320084 | jenkins | v1.32.0 | 16 Jan 24 04:05 UTC | 16 Jan 24 04:05 UTC |
	| start   | -o=json --download-only           | download-only-925235 | jenkins | v1.32.0 | 16 Jan 24 04:05 UTC |                     |
	|         | -p download-only-925235           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 16 Jan 24 04:05 UTC | 16 Jan 24 04:05 UTC |
	| delete  | -p download-only-925235           | download-only-925235 | jenkins | v1.32.0 | 16 Jan 24 04:05 UTC | 16 Jan 24 04:05 UTC |
	| start   | -o=json --download-only           | download-only-859041 | jenkins | v1.32.0 | 16 Jan 24 04:05 UTC |                     |
	|         | -p download-only-859041           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 04:05:46
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 04:05:46.435285 2421335 out.go:296] Setting OutFile to fd 1 ...
	I0116 04:05:46.435442 2421335 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 04:05:46.435451 2421335 out.go:309] Setting ErrFile to fd 2...
	I0116 04:05:46.435462 2421335 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 04:05:46.435785 2421335 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-2415678/.minikube/bin
	I0116 04:05:46.436251 2421335 out.go:303] Setting JSON to true
	I0116 04:05:46.437185 2421335 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":38877,"bootTime":1705339069,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0116 04:05:46.437259 2421335 start.go:138] virtualization:  
	I0116 04:05:46.439832 2421335 out.go:97] [download-only-859041] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0116 04:05:46.441819 2421335 out.go:169] MINIKUBE_LOCATION=17965
	I0116 04:05:46.440126 2421335 notify.go:220] Checking for updates...
	I0116 04:05:46.443856 2421335 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 04:05:46.445977 2421335 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17965-2415678/kubeconfig
	I0116 04:05:46.448068 2421335 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-2415678/.minikube
	I0116 04:05:46.449934 2421335 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0116 04:05:46.453489 2421335 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0116 04:05:46.453763 2421335 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 04:05:46.477932 2421335 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0116 04:05:46.478040 2421335 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 04:05:46.576351 2421335 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2024-01-16 04:05:46.565218688 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0116 04:05:46.576459 2421335 docker.go:295] overlay module found
	I0116 04:05:46.578436 2421335 out.go:97] Using the docker driver based on user configuration
	I0116 04:05:46.578467 2421335 start.go:298] selected driver: docker
	I0116 04:05:46.578474 2421335 start.go:902] validating driver "docker" against <nil>
	I0116 04:05:46.578574 2421335 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 04:05:46.652044 2421335 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2024-01-16 04:05:46.641771311 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0116 04:05:46.652212 2421335 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0116 04:05:46.652484 2421335 start_flags.go:392] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0116 04:05:46.652654 2421335 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0116 04:05:46.654927 2421335 out.go:169] Using Docker driver with root privileges
	I0116 04:05:46.657019 2421335 cni.go:84] Creating CNI manager for ""
	I0116 04:05:46.657042 2421335 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0116 04:05:46.657056 2421335 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0116 04:05:46.657068 2421335 start_flags.go:321] config:
	{Name:download-only-859041 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-859041 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 04:05:46.659180 2421335 out.go:97] Starting control plane node download-only-859041 in cluster download-only-859041
	I0116 04:05:46.659203 2421335 cache.go:121] Beginning downloading kic base image for docker with crio
	I0116 04:05:46.661104 2421335 out.go:97] Pulling base image v0.0.42-1704759386-17866 ...
	I0116 04:05:46.661139 2421335 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0116 04:05:46.661247 2421335 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0116 04:05:46.682527 2421335 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0116 04:05:46.682655 2421335 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0116 04:05:46.682673 2421335 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory, skipping pull
	I0116 04:05:46.682677 2421335 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in cache, skipping pull
	I0116 04:05:46.682684 2421335 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	I0116 04:05:46.724368 2421335 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4
	I0116 04:05:46.724398 2421335 cache.go:56] Caching tarball of preloaded images
	I0116 04:05:46.725110 2421335 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0116 04:05:46.727314 2421335 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0116 04:05:46.727334 2421335 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4 ...
	I0116 04:05:46.838962 2421335 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4?checksum=md5:9d8119c6fd5c58f71de57a6fdbe27bf3 -> /home/jenkins/minikube-integration/17965-2415678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4
	I0116 04:05:55.161215 2421335 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4 ...
	I0116 04:05:55.161329 2421335 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17965-2415678/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-arm64.tar.lz4 ...
	I0116 04:05:56.037086 2421335 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I0116 04:05:56.037641 2421335 profile.go:148] Saving config to /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/download-only-859041/config.json ...
	I0116 04:05:56.037697 2421335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/download-only-859041/config.json: {Name:mk8fcbdd58017bc1529293f198fb481bdb1de2cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 04:05:56.037945 2421335 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0116 04:05:56.038141 2421335 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/17965-2415678/.minikube/cache/linux/arm64/v1.29.0-rc.2/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-859041"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.44s)
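
Note the subtest passes even though "minikube logs" exits 85: for a download-only profile there is no control plane, so that exit status is expected. A hedged sketch of how a harness can run the command and tolerate exactly that outcome (binary path and profile name copied from this report, not from minikube's test code):

```go
// Sketch only: run "minikube logs" against a download-only profile and
// report the non-zero exit status instead of treating it as a failure.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "logs", "-p", "download-only-859041")
	out, err := cmd.CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// Exit status 85 is expected here: the profile has no control plane.
		fmt.Printf("exit status %d (tolerated for download-only profiles)\n", ee.ExitCode())
	} else if err != nil {
		fmt.Println("failed to run:", err)
	}
	fmt.Print(string(out))
}
```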

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.39s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.39s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.27s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-859041
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.27s)

                                                
                                    
TestBinaryMirror (0.62s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-061244 --alsologtostderr --binary-mirror http://127.0.0.1:41211 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-061244" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-061244
--- PASS: TestBinaryMirror (0.62s)
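
The --binary-mirror flag above points minikube's binary downloads at a local HTTP endpoint. A stand-in mirror is nothing more than a static file server; a minimal sketch, where the directory name is an illustrative assumption and the port matches the one in the test invocation:

```go
// Sketch only: serve a directory of pre-fetched binaries on the loopback
// address that "minikube start --binary-mirror" is pointed at.
package main

import (
	"log"
	"net/http"
)

func main() {
	// ./mirror would hold kubectl/kubelet/kubeadm under the release path
	// layout the mirror URL scheme expects (assumption for illustration).
	http.Handle("/", http.FileServer(http.Dir("./mirror")))
	log.Fatal(http.ListenAndServe("127.0.0.1:41211", nil))
}
```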

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.1s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-775662
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-775662: exit status 85 (96.785628ms)

                                                
                                                
-- stdout --
	* Profile "addons-775662" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-775662"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.10s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-775662
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-775662: exit status 85 (87.875373ms)

                                                
                                                
-- stdout --
	* Profile "addons-775662" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-775662"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

                                                
                                    
TestAddons/Setup (166.8s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-775662 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-775662 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (2m46.803931242s)
--- PASS: TestAddons/Setup (166.80s)

                                                
                                    
TestAddons/parallel/Registry (16.77s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 45.136159ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-fshp9" [21ff04cb-d00c-4e3c-ae50-b7c1be39cb71] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005275896s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-ljm2x" [8d2e9319-cfe5-4357-8b2b-7e474e6d5b12] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004455467s
addons_test.go:340: (dbg) Run:  kubectl --context addons-775662 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-775662 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-775662 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.226052443s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-arm64 -p addons-775662 ip
2024/01/16 04:09:02 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p addons-775662 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.77s)
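
The registry check above has two probes: an in-cluster "wget --spider" against the service DNS name, and a host-side request to the node address logged in the DEBUG line. A sketch of the same reachability probe in Go (the node URL is copied from that DEBUG line; inside the cluster the service name registry.kube-system.svc.cluster.local would be used instead):

```go
// Sketch only: HEAD-style probe that merely confirms the registry endpoint
// answers, mirroring what "wget --spider -S" checks.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Head("http://192.168.49.2:5000")
	if err != nil {
		fmt.Println("registry not reachable:", err)
		return
	}
	resp.Body.Close()
	fmt.Println("registry answered with HTTP", resp.StatusCode)
}
```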

                                                
                                    
TestAddons/parallel/InspektorGadget (11.02s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-kppk2" [ab6a9f03-dc2d-4d17-98fe-f8a4af1f1d0c] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004859338s
addons_test.go:841: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-775662
addons_test.go:841: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-775662: (6.017515812s)
--- PASS: TestAddons/parallel/InspektorGadget (11.02s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.86s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 5.424305ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-dtqwj" [d8578bd2-418d-4fcd-ac58-07a6188c73ed] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.00467321s
addons_test.go:415: (dbg) Run:  kubectl --context addons-775662 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-arm64 -p addons-775662 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.86s)

                                                
                                    
TestAddons/parallel/CSI (54.06s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 45.301898ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-775662 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-775662 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-775662 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-775662 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-775662 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-775662 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-775662 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-775662 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-775662 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-775662 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-775662 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-775662 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-775662 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-775662 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-775662 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-775662 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-775662 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-775662 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-775662 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [d13891ff-2c7c-4890-b92b-14293b5e9562] Pending
helpers_test.go:344: "task-pv-pod" [d13891ff-2c7c-4890-b92b-14293b5e9562] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [d13891ff-2c7c-4890-b92b-14293b5e9562] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.004714037s
addons_test.go:584: (dbg) Run:  kubectl --context addons-775662 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-775662 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-775662 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-775662 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-775662 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-775662 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-775662 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-775662 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-775662 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-775662 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-775662 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-775662 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-775662 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-775662 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [b0635108-6e4c-457b-a77a-3a631d61ce25] Pending
helpers_test.go:344: "task-pv-pod-restore" [b0635108-6e4c-457b-a77a-3a631d61ce25] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [b0635108-6e4c-457b-a77a-3a631d61ce25] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004453868s
addons_test.go:626: (dbg) Run:  kubectl --context addons-775662 delete pod task-pv-pod-restore
addons_test.go:626: (dbg) Done: kubectl --context addons-775662 delete pod task-pv-pod-restore: (1.347387009s)
addons_test.go:630: (dbg) Run:  kubectl --context addons-775662 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-775662 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-arm64 -p addons-775662 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-arm64 -p addons-775662 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.860346345s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-arm64 -p addons-775662 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (54.06s)
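
The long runs of helpers_test.go:394 lines above are a poll loop: the helper re-reads the PVC phase until it reports Bound or the deadline passes. A minimal sketch of that loop, shelling out to kubectl exactly as the log shows (context and PVC name taken from the log; the 2-second interval is an assumption):

```go
// Sketch only: poll a PVC's phase via kubectl jsonpath until it is Bound.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitForPVCBound(kubeContext, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pvc", name, "-o", "jsonpath={.status.phase}", "-n", "default").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second) // retry interval (assumption)
	}
	return fmt.Errorf("pvc %q not Bound within %v", name, timeout)
}

func main() {
	fmt.Println(waitForPVCBound("addons-775662", "hpvc", 6*time.Minute))
}
```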

                                                
                                    
TestAddons/parallel/Headlamp (11.49s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-775662 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-775662 --alsologtostderr -v=1: (1.484168825s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-r7cpl" [2b6aee66-a743-4e74-8625-11c66bd49fb0] Pending
helpers_test.go:344: "headlamp-7ddfbb94ff-r7cpl" [2b6aee66-a743-4e74-8625-11c66bd49fb0] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-r7cpl" [2b6aee66-a743-4e74-8625-11c66bd49fb0] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.004305129s
--- PASS: TestAddons/parallel/Headlamp (11.49s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.61s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-64c8c85f65-c78gt" [158d1ed5-ffe1-47b4-aa72-ae9b696829c6] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004430882s
addons_test.go:860: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-775662
--- PASS: TestAddons/parallel/CloudSpanner (5.61s)

                                                
                                    
TestAddons/parallel/LocalPath (8.82s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-775662 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-775662 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-775662 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-775662 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-775662 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-775662 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-775662 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [ba6df939-333b-4e34-a0e7-84efff0b489a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [ba6df939-333b-4e34-a0e7-84efff0b489a] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [ba6df939-333b-4e34-a0e7-84efff0b489a] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.00441425s
addons_test.go:891: (dbg) Run:  kubectl --context addons-775662 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-arm64 -p addons-775662 ssh "cat /opt/local-path-provisioner/pvc-4b62521c-5878-4383-9538-7633795decd3_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-775662 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-775662 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-arm64 -p addons-775662 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (8.82s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.56s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-gb8vg" [94232484-f9f3-41aa-9cb0-026faa3e71df] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.00451431s
addons_test.go:955: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-775662
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.56s)

                                                
                                    
TestAddons/parallel/Yakd (6s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-qzzqr" [2cb80c5d-35ec-4e97-a8c8-323e00bc79d2] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004078679s
--- PASS: TestAddons/parallel/Yakd (6.00s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-775662 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-775662 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.37s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-775662
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-775662: (12.041135892s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-775662
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-775662
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-775662
--- PASS: TestAddons/StoppedEnableDisable (12.37s)

                                                
                                    
TestCertOptions (39.86s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-165621 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E0116 04:49:03.660916 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-165621 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (36.982017548s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-165621 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-165621 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-165621 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-165621" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-165621
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-165621: (2.045889384s)
--- PASS: TestCertOptions (39.86s)
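
The "openssl x509 -text -noout" step above verifies that the extra --apiserver-ips and --apiserver-names values landed in the API server certificate. The same check in Go, as a sketch: the certificate path is the one the test cats over ssh, and reading it locally here is an illustrative assumption.

```go
// Sketch only: parse the apiserver certificate and list its SANs, which
// should include 127.0.0.1, 192.168.15.15, localhost and www.google.com
// per the flags in the test invocation above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse:", err)
		return
	}
	fmt.Println("IP SANs: ", cert.IPAddresses)
	fmt.Println("DNS SANs:", cert.DNSNames)
}
```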

                                                
                                    
TestCertExpiration (257.15s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-332335 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
E0116 04:48:46.727671 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-332335 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (41.845087304s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-332335 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-332335 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (32.565915865s)
helpers_test.go:175: Cleaning up "cert-expiration-332335" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-332335
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-332335: (2.73623835s)
--- PASS: TestCertExpiration (257.15s)
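
The two starts above differ only in --cert-expiration: 3m forces the certificate to lapse almost immediately, 8760h (one year) pushes NotAfter well out. The underlying comparison is trivial; a sketch with stand-in NotAfter values (the real values would come from parsing the certificate as in the previous sketch):

```go
// Sketch only: the expiry test boils down to comparing NotAfter against a
// window around "now".
package main

import (
	"fmt"
	"time"
)

func expiresWithin(notAfter time.Time, window time.Duration) bool {
	return time.Until(notAfter) <= window
}

func main() {
	threeMin := time.Now().Add(3 * time.Minute) // stand-in for --cert-expiration=3m
	oneYear := time.Now().Add(8760 * time.Hour) // stand-in for --cert-expiration=8760h
	fmt.Println(expiresWithin(threeMin, 5*time.Minute)) // true
	fmt.Println(expiresWithin(oneYear, 5*time.Minute))  // false
}
```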

                                                
                                    
TestForceSystemdFlag (38.12s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-760588 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-760588 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (35.516320038s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-760588 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-760588" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-760588
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-760588: (2.216264502s)
--- PASS: TestForceSystemdFlag (38.12s)
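
The "cat /etc/crio/crio.conf.d/02-crio.conf" step above confirms that --force-systemd pinned CRI-O's cgroup manager to systemd. A sketch of that scan, reading the file locally as a stand-in for the ssh round-trip; the exact key name follows CRI-O's config format and is an assumption here:

```go
// Sketch only: look for the cgroup_manager setting in the CRI-O drop-in.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/crio/crio.conf.d/02-crio.conf")
	if err != nil {
		fmt.Println("open:", err)
		return
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "cgroup_manager") {
			fmt.Println("found:", line) // expect: cgroup_manager = "systemd"
		}
	}
}
```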

                                                
                                    
TestForceSystemdEnv (38.18s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-720978 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-720978 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (35.516074997s)
helpers_test.go:175: Cleaning up "force-systemd-env-720978" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-720978
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-720978: (2.658271996s)
--- PASS: TestForceSystemdEnv (38.18s)

                                                
                                    
TestErrorSpam/setup (29.81s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-688820 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-688820 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-688820 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-688820 --driver=docker  --container-runtime=crio: (29.809913306s)
--- PASS: TestErrorSpam/setup (29.81s)

                                                
                                    
TestErrorSpam/start (0.87s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-688820 --log_dir /tmp/nospam-688820 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-688820 --log_dir /tmp/nospam-688820 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-688820 --log_dir /tmp/nospam-688820 start --dry-run
--- PASS: TestErrorSpam/start (0.87s)

                                                
                                    
TestErrorSpam/status (1.19s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-688820 --log_dir /tmp/nospam-688820 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-688820 --log_dir /tmp/nospam-688820 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-688820 --log_dir /tmp/nospam-688820 status
--- PASS: TestErrorSpam/status (1.19s)

                                                
                                    
TestErrorSpam/pause (1.94s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-688820 --log_dir /tmp/nospam-688820 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-688820 --log_dir /tmp/nospam-688820 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-688820 --log_dir /tmp/nospam-688820 pause
--- PASS: TestErrorSpam/pause (1.94s)

                                                
                                    
TestErrorSpam/unpause (2.05s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-688820 --log_dir /tmp/nospam-688820 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-688820 --log_dir /tmp/nospam-688820 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-688820 --log_dir /tmp/nospam-688820 unpause
--- PASS: TestErrorSpam/unpause (2.05s)

                                                
                                    
TestErrorSpam/stop (1.51s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-688820 --log_dir /tmp/nospam-688820 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-688820 --log_dir /tmp/nospam-688820 stop: (1.273383067s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-688820 --log_dir /tmp/nospam-688820 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-688820 --log_dir /tmp/nospam-688820 stop
--- PASS: TestErrorSpam/stop (1.51s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /home/jenkins/minikube-integration/17965-2415678/.minikube/files/etc/test/nested/copy/2421005/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (77.65s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-linux-arm64 start -p functional-032172 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0116 04:13:46.730529 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/client.crt: no such file or directory
E0116 04:13:46.737933 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/client.crt: no such file or directory
E0116 04:13:46.748184 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/client.crt: no such file or directory
E0116 04:13:46.768460 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/client.crt: no such file or directory
E0116 04:13:46.808736 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/client.crt: no such file or directory
E0116 04:13:46.889067 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/client.crt: no such file or directory
E0116 04:13:47.050131 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/client.crt: no such file or directory
E0116 04:13:47.370684 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/client.crt: no such file or directory
E0116 04:13:48.010951 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/client.crt: no such file or directory
E0116 04:13:49.291208 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/client.crt: no such file or directory
E0116 04:13:51.852891 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/client.crt: no such file or directory
E0116 04:13:56.973237 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/client.crt: no such file or directory
E0116 04:14:07.213441 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/client.crt: no such file or directory
E0116 04:14:27.694192 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/client.crt: no such file or directory
functional_test.go:2233: (dbg) Done: out/minikube-linux-arm64 start -p functional-032172 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m17.647996616s)
--- PASS: TestFunctional/serial/StartWithProxy (77.65s)
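
The start flags above translate one-for-one to a plain minikube invocation. A minimal sketch of the same start behind an HTTP proxy, assuming a hypothetical proxy endpoint (minikube honors the standard HTTP_PROXY/HTTPS_PROXY/NO_PROXY environment variables):

	# proxy.example.com:3128 is a placeholder, not a real endpoint from this run
	export HTTP_PROXY=http://proxy.example.com:3128
	export NO_PROXY=localhost,127.0.0.1,192.168.49.0/24   # keep cluster traffic off the proxy
	minikube start -p functional-032172 --memory=4000 --apiserver-port=8441 \
	  --wait=all --driver=docker --container-runtime=crio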

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (33.13s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-032172 --alsologtostderr -v=8
E0116 04:15:08.655091 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-032172 --alsologtostderr -v=8: (33.127357465s)
functional_test.go:659: soft start took 33.129155844s for "functional-032172" cluster.
--- PASS: TestFunctional/serial/SoftStart (33.13s)

TestFunctional/serial/KubeContext (0.09s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.09s)

TestFunctional/serial/KubectlGetPods (0.13s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-032172 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.13s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.76s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-032172 cache add registry.k8s.io/pause:3.1: (1.247021856s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-032172 cache add registry.k8s.io/pause:3.3: (1.265697115s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-032172 cache add registry.k8s.io/pause:latest: (1.243702318s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.76s)

TestFunctional/serial/CacheCmd/cache/add_local (1.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-032172 /tmp/TestFunctionalserialCacheCmdcacheadd_local3032527062/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 cache add minikube-local-cache-test:functional-032172
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 cache delete minikube-local-cache-test:functional-032172
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-032172
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.10s)
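
The same local-image flow with a plain minikube binary, assuming an image built from a Dockerfile in the current directory:

	docker build -t minikube-local-cache-test:functional-032172 .
	# push the locally built image into the profile's image cache
	minikube -p functional-032172 cache add minikube-local-cache-test:functional-032172
	# clean up: drop it from the cache and from the host's docker daemon
	minikube -p functional-032172 cache delete minikube-local-cache-test:functional-032172
	docker rmi minikube-local-cache-test:functional-032172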

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.37s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.37s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-032172 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (341.375172ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-032172 cache reload: (1.03594883s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.11s)
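
In plain commands, the reload round-trip above looks like this (a sketch against the same profile; `cache reload` re-pushes every cached image into the node):

	minikube -p functional-032172 ssh sudo crictl rmi registry.k8s.io/pause:latest
	minikube -p functional-032172 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image was removed
	minikube -p functional-032172 cache reload
	minikube -p functional-032172 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again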

TestFunctional/serial/CacheCmd/cache/delete (0.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.15s)

TestFunctional/serial/MinikubeKubectlCmd (0.17s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 kubectl -- --context functional-032172 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.17s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-032172 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

TestFunctional/serial/ExtraConfig (35.16s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-032172 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-032172 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.154988377s)
functional_test.go:757: restart took 35.155086443s for "functional-032172" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (35.16s)
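
The --extra-config flag uses component.key=value form; a minimal sketch of the restart exercised here:

	# enable-admission-plugins is passed straight through to kube-apiserver
	minikube start -p functional-032172 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all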

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-032172 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.95s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-032172 logs: (1.954360629s)
--- PASS: TestFunctional/serial/LogsCmd (1.95s)

TestFunctional/serial/LogsFileCmd (1.94s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 logs --file /tmp/TestFunctionalserialLogsFileCmd1453821348/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-032172 logs --file /tmp/TestFunctionalserialLogsFileCmd1453821348/001/logs.txt: (1.935935678s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.94s)

TestFunctional/serial/InvalidService (4.84s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-032172 apply -f testdata/invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-032172
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-032172: exit status 115 (614.084509ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30539 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-032172 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.84s)
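
A sketch of the failure path this test checks: a Service whose selector matches no running pod makes `minikube service` exit with SVC_UNREACHABLE (status 115) instead of hanging:

	kubectl --context functional-032172 apply -f testdata/invalidsvc.yaml
	minikube service invalid-svc -p functional-032172    # exit 115: no running pod for the service
	kubectl --context functional-032172 delete -f testdata/invalidsvc.yaml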

TestFunctional/parallel/ConfigCmd (0.64s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-032172 config get cpus: exit status 14 (135.675219ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-032172 config get cpus: exit status 14 (107.001944ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.64s)
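
The set/get/unset cycle above, as plain commands; `config get` on an unset key exits 14:

	minikube -p functional-032172 config get cpus     # exit 14: key not in config
	minikube -p functional-032172 config set cpus 2
	minikube -p functional-032172 config get cpus     # prints 2
	minikube -p functional-032172 config unset cpus   # a subsequent get exits 14 again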

TestFunctional/parallel/DashboardCmd (13.2s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-032172 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-032172 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2447042: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.20s)

TestFunctional/parallel/DryRun (0.67s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-032172 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-032172 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (323.591606ms)

-- stdout --
	* [functional-032172] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17965
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17965-2415678/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-2415678/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0116 04:17:00.719911 2446569 out.go:296] Setting OutFile to fd 1 ...
	I0116 04:17:00.720134 2446569 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 04:17:00.720163 2446569 out.go:309] Setting ErrFile to fd 2...
	I0116 04:17:00.720184 2446569 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 04:17:00.720538 2446569 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-2415678/.minikube/bin
	I0116 04:17:00.721306 2446569 out.go:303] Setting JSON to false
	I0116 04:17:00.723352 2446569 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":39552,"bootTime":1705339069,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0116 04:17:00.723517 2446569 start.go:138] virtualization:  
	I0116 04:17:00.729342 2446569 out.go:177] * [functional-032172] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0116 04:17:00.731439 2446569 out.go:177]   - MINIKUBE_LOCATION=17965
	I0116 04:17:00.731527 2446569 notify.go:220] Checking for updates...
	I0116 04:17:00.733617 2446569 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 04:17:00.735874 2446569 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17965-2415678/kubeconfig
	I0116 04:17:00.738259 2446569 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-2415678/.minikube
	I0116 04:17:00.740476 2446569 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0116 04:17:00.742645 2446569 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 04:17:00.745534 2446569 config.go:182] Loaded profile config "functional-032172": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 04:17:00.746342 2446569 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 04:17:00.802356 2446569 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0116 04:17:00.802493 2446569 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 04:17:00.939655 2446569 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2024-01-16 04:17:00.928181501 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0116 04:17:00.939781 2446569 docker.go:295] overlay module found
	I0116 04:17:00.943585 2446569 out.go:177] * Using the docker driver based on existing profile
	I0116 04:17:00.945617 2446569 start.go:298] selected driver: docker
	I0116 04:17:00.945665 2446569 start.go:902] validating driver "docker" against &{Name:functional-032172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-032172 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 04:17:00.945894 2446569 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 04:17:00.948515 2446569 out.go:177] 
	W0116 04:17:00.950524 2446569 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0116 04:17:00.952685 2446569 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-032172 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.67s)
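
Sketch of the dry-run check: --dry-run validates flags against the existing profile without starting anything, and an impossible memory request exits with RSRC_INSUFFICIENT_REQ_MEMORY (status 23):

	minikube start -p functional-032172 --dry-run --memory 250MB \
	  --driver=docker --container-runtime=crio    # exit 23: below the 1800MB usable minimum
	minikube start -p functional-032172 --dry-run \
	  --driver=docker --container-runtime=crio    # exit 0: current config validates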

TestFunctional/parallel/InternationalLanguage (0.27s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-032172 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-032172 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (265.321487ms)

-- stdout --
	* [functional-032172] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17965
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17965-2415678/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-2415678/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0116 04:17:01.380665 2446720 out.go:296] Setting OutFile to fd 1 ...
	I0116 04:17:01.380905 2446720 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 04:17:01.380915 2446720 out.go:309] Setting ErrFile to fd 2...
	I0116 04:17:01.380921 2446720 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 04:17:01.381768 2446720 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-2415678/.minikube/bin
	I0116 04:17:01.382264 2446720 out.go:303] Setting JSON to false
	I0116 04:17:01.383407 2446720 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":39552,"bootTime":1705339069,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0116 04:17:01.383575 2446720 start.go:138] virtualization:  
	I0116 04:17:01.386032 2446720 out.go:177] * [functional-032172] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	I0116 04:17:01.389163 2446720 out.go:177]   - MINIKUBE_LOCATION=17965
	I0116 04:17:01.391066 2446720 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 04:17:01.389450 2446720 notify.go:220] Checking for updates...
	I0116 04:17:01.395855 2446720 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17965-2415678/kubeconfig
	I0116 04:17:01.398251 2446720 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-2415678/.minikube
	I0116 04:17:01.401029 2446720 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0116 04:17:01.403612 2446720 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 04:17:01.407157 2446720 config.go:182] Loaded profile config "functional-032172": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 04:17:01.407785 2446720 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 04:17:01.438480 2446720 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0116 04:17:01.438641 2446720 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 04:17:01.546145 2446720 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2024-01-16 04:17:01.527160367 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0116 04:17:01.546257 2446720 docker.go:295] overlay module found
	I0116 04:17:01.548692 2446720 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0116 04:17:01.551086 2446720 start.go:298] selected driver: docker
	I0116 04:17:01.551116 2446720 start.go:902] validating driver "docker" against &{Name:functional-032172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-032172 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 04:17:01.551298 2446720 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 04:17:01.554643 2446720 out.go:177] 
	W0116 04:17:01.556808 2446720 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0116 04:17:01.559134 2446720 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.27s)

TestFunctional/parallel/StatusCmd (1.24s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.24s)

TestFunctional/parallel/ServiceCmdConnect (10.88s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-032172 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-032172 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-67chm" [f47fce9e-c7c5-4058-a5b6-6f9c6c0e0861] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-67chm" [f47fce9e-c7c5-4058-a5b6-6f9c6c0e0861] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.004195936s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:32584
functional_test.go:1674: http://192.168.49.2:32584: success! body:

Hostname: hello-node-connect-7799dfb7c6-67chm

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32584
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.88s)
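
The end-to-end connectivity check above in plain commands (the NodePort, 32584 here, is assigned by Kubernetes and varies per run):

	kubectl --context functional-032172 create deployment hello-node-connect \
	  --image=registry.k8s.io/echoserver-arm:1.8
	kubectl --context functional-032172 expose deployment hello-node-connect \
	  --type=NodePort --port=8080
	URL=$(minikube -p functional-032172 service hello-node-connect --url)
	curl "$URL"    # echoserver replies with hostname, headers, and request details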

TestFunctional/parallel/AddonsCmd (0.28s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.28s)

TestFunctional/parallel/PersistentVolumeClaim (24.73s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [0cb8f23c-d2b4-4134-bd3e-8e4bd76f5429] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004138519s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-032172 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-032172 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-032172 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-032172 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [1c04dc6b-0bdc-4ba0-ba26-74ddcbd6a22c] Pending
helpers_test.go:344: "sp-pod" [1c04dc6b-0bdc-4ba0-ba26-74ddcbd6a22c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [1c04dc6b-0bdc-4ba0-ba26-74ddcbd6a22c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003873767s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-032172 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-032172 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-032172 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2ca83077-85ff-4003-b00e-8229b078df0e] Pending
helpers_test.go:344: "sp-pod" [2ca83077-85ff-4003-b00e-8229b078df0e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2ca83077-85ff-4003-b00e-8229b078df0e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.018359586s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-032172 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.73s)
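
The persistence check above as a plain sequence: data written through the claim must survive deletion and re-creation of the consuming pod:

	kubectl --context functional-032172 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-032172 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-032172 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-032172 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-032172 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-032172 exec sp-pod -- ls /tmp/mount    # foo is still there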

TestFunctional/parallel/SSHCmd (0.89s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.89s)

TestFunctional/parallel/CpCmd (2.31s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 ssh -n functional-032172 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 cp functional-032172:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2182284366/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 ssh -n functional-032172 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 ssh -n functional-032172 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.31s)
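
`minikube cp` copies in either direction; the node side is addressed as profile:path (the destination filename below is illustrative):

	minikube -p functional-032172 cp testdata/cp-test.txt /home/docker/cp-test.txt            # host -> node
	minikube -p functional-032172 cp functional-032172:/home/docker/cp-test.txt ./cp-test.txt   # node -> host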

TestFunctional/parallel/FileSync (0.4s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/2421005/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 ssh "sudo cat /etc/test/nested/copy/2421005/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.40s)
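
Sketch of the mechanism being verified: files placed under $MINIKUBE_HOME/.minikube/files/ before a start are copied into the node at the corresponding absolute path (the nested path below mirrors this run; the sync happens during start):

	mkdir -p ~/.minikube/files/etc/test/nested/copy/2421005
	echo 'Test file for checking file sync process' \
	  > ~/.minikube/files/etc/test/nested/copy/2421005/hosts
	minikube start -p functional-032172
	minikube -p functional-032172 ssh "sudo cat /etc/test/nested/copy/2421005/hosts"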

TestFunctional/parallel/CertSync (2.57s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/2421005.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 ssh "sudo cat /etc/ssl/certs/2421005.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/2421005.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 ssh "sudo cat /usr/share/ca-certificates/2421005.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/24210052.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 ssh "sudo cat /etc/ssl/certs/24210052.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/24210052.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 ssh "sudo cat /usr/share/ca-certificates/24210052.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.57s)
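
Sketch of what is being verified, assuming a hypothetical my-ca.pem: certificates dropped into $MINIKUBE_HOME/.minikube/certs are installed into the node under /etc/ssl/certs and /usr/share/ca-certificates at start, alongside OpenSSL subject-hash names like the 51391683.0 checked above:

	cp my-ca.pem ~/.minikube/certs/        # my-ca.pem is a placeholder, not from this run
	minikube start -p functional-032172
	minikube -p functional-032172 ssh "sudo cat /etc/ssl/certs/my-ca.pem"
	minikube -p functional-032172 ssh "sudo cat /usr/share/ca-certificates/my-ca.pem"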

TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-032172 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.85s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 ssh "sudo systemctl is-active docker"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-032172 ssh "sudo systemctl is-active docker": exit status 1 (378.740648ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2026: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 ssh "sudo systemctl is-active containerd"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-032172 ssh "sudo systemctl is-active containerd": exit status 1 (472.627135ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.85s)
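
Since this profile runs --container-runtime=crio, the competing runtime units should be inactive; `systemctl is-active` exits 3 for an inactive unit, which minikube ssh surfaces as exit status 1:

	minikube -p functional-032172 ssh "sudo systemctl is-active docker"       # prints inactive
	minikube -p functional-032172 ssh "sudo systemctl is-active containerd"   # prints inactive
	minikube -p functional-032172 ssh "sudo systemctl is-active crio"         # expected active (not asserted by this test)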

TestFunctional/parallel/License (0.37s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.37s)

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (1.46s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 version -o=json --components
functional_test.go:2269: (dbg) Done: out/minikube-linux-arm64 -p functional-032172 version -o=json --components: (1.464052729s)
--- PASS: TestFunctional/parallel/Version/components (1.46s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-032172 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-032172
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-032172 image ls --format short --alsologtostderr:
I0116 04:17:12.857309 2448081 out.go:296] Setting OutFile to fd 1 ...
I0116 04:17:12.857474 2448081 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 04:17:12.857485 2448081 out.go:309] Setting ErrFile to fd 2...
I0116 04:17:12.857491 2448081 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 04:17:12.857751 2448081 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-2415678/.minikube/bin
I0116 04:17:12.858451 2448081 config.go:182] Loaded profile config "functional-032172": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 04:17:12.858607 2448081 config.go:182] Loaded profile config "functional-032172": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 04:17:12.859137 2448081 cli_runner.go:164] Run: docker container inspect functional-032172 --format={{.State.Status}}
I0116 04:17:12.878298 2448081 ssh_runner.go:195] Run: systemctl --version
I0116 04:17:12.878359 2448081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-032172
I0116 04:17:12.903202 2448081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35326 SSHKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/machines/functional-032172/id_rsa Username:docker}
I0116 04:17:12.998618 2448081 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-032172 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | 04b4eaa3d3db8 | 60.9MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | 97e04611ad434 | 51.4MB |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 04b4c447bb9d4 | 121MB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| docker.io/library/nginx                 | alpine             | 74077e780ec71 | 45.3MB |
| gcr.io/google-containers/addon-resizer  | functional-032172  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 9cdd6470f48c8 | 182MB  |
| registry.k8s.io/kube-controller-manager | v1.28.4            | 9961cbceaf234 | 117MB  |
| registry.k8s.io/kube-proxy              | v1.28.4            | 3ca3ca488cf13 | 70MB   |
| docker.io/library/nginx                 | latest             | 6c7be49d2a11c | 196MB  |
| gcr.io/k8s-minikube/busybox             | latest             | 71a676dd070f4 | 1.63MB |
| registry.k8s.io/kube-scheduler          | v1.28.4            | 05c284c929889 | 59.3MB |
| registry.k8s.io/pause                   | 3.9                | 829e9de338bd5 | 520kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-032172 image ls --format table --alsologtostderr:
I0116 04:17:15.289194 2448274 out.go:296] Setting OutFile to fd 1 ...
I0116 04:17:15.289454 2448274 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 04:17:15.289463 2448274 out.go:309] Setting ErrFile to fd 2...
I0116 04:17:15.289469 2448274 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 04:17:15.289735 2448274 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-2415678/.minikube/bin
I0116 04:17:15.290459 2448274 config.go:182] Loaded profile config "functional-032172": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 04:17:15.290621 2448274 config.go:182] Loaded profile config "functional-032172": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 04:17:15.291212 2448274 cli_runner.go:164] Run: docker container inspect functional-032172 --format={{.State.Status}}
I0116 04:17:15.312287 2448274 ssh_runner.go:195] Run: systemctl --version
I0116 04:17:15.312345 2448274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-032172
I0116 04:17:15.340550 2448274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35326 SSHKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/machines/functional-032172/id_rsa Username:docker}
I0116 04:17:15.439002 2448274 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-032172 image ls --format yaml --alsologtostderr:
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
- registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "182203183"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-032172
size: "34114467"
- id: 74077e780ec714353793e0ef5677b55d7396aa1d31e77ec899f54842f7142448
repoDigests:
- docker.io/library/nginx@sha256:7913e8fa2e6a5f0160a5e6b7ea48b7d4a301c6058d63c3d632a35a59093cb4eb
- docker.io/library/nginx@sha256:a59278fd22a9d411121e190b8cec8aa57b306aa3332459197777583beb728f59
repoTags:
- docker.io/library/nginx:alpine
size: "45330189"
- id: 05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "59253556"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51393451"
- id: 04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
- registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "121119694"
- id: 3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39
repoDigests:
- registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "69992343"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "60867618"
- id: 6c7be49d2a11cfab9a87362ad27d447b45931e43dfa6919a8e1398ec09c1e353
repoDigests:
- docker.io/library/nginx@sha256:4c0fdaa8b6341bfdeca5f18f7837462c80cff90527ee35ef185571e1c327beac
- docker.io/library/nginx@sha256:523c417937604bc107d799e5cad1ae2ca8a9fd46306634fa2c547dc6220ec17c
repoTags:
- docker.io/library/nginx:latest
size: "196113558"
- id: 9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "117252916"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "520014"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-032172 image ls --format yaml --alsologtostderr:
I0116 04:17:13.127138 2448107 out.go:296] Setting OutFile to fd 1 ...
I0116 04:17:13.127329 2448107 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 04:17:13.127342 2448107 out.go:309] Setting ErrFile to fd 2...
I0116 04:17:13.127349 2448107 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 04:17:13.127638 2448107 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-2415678/.minikube/bin
I0116 04:17:13.128409 2448107 config.go:182] Loaded profile config "functional-032172": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 04:17:13.128602 2448107 config.go:182] Loaded profile config "functional-032172": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 04:17:13.129230 2448107 cli_runner.go:164] Run: docker container inspect functional-032172 --format={{.State.Status}}
I0116 04:17:13.148899 2448107 ssh_runner.go:195] Run: systemctl --version
I0116 04:17:13.148960 2448107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-032172
I0116 04:17:13.168441 2448107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35326 SSHKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/machines/functional-032172/id_rsa Username:docker}
I0116 04:17:13.266626 2448107 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-032172 ssh pgrep buildkitd: exit status 1 (299.851581ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 image build -t localhost/my-image:functional-032172 testdata/build --alsologtostderr
2024/01/16 04:17:14 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-032172 image build -t localhost/my-image:functional-032172 testdata/build --alsologtostderr: (2.500939347s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-arm64 -p functional-032172 image build -t localhost/my-image:functional-032172 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 341e9fef15e
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-032172
--> d09b0f79f26
Successfully tagged localhost/my-image:functional-032172
d09b0f79f26cf35a4bc9b29eb1ecda6e901d737cf9e6446aceaf5b19543f6fd6
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-032172 image build -t localhost/my-image:functional-032172 testdata/build --alsologtostderr:
I0116 04:17:13.693247 2448184 out.go:296] Setting OutFile to fd 1 ...
I0116 04:17:13.694087 2448184 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 04:17:13.694098 2448184 out.go:309] Setting ErrFile to fd 2...
I0116 04:17:13.694105 2448184 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 04:17:13.694389 2448184 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-2415678/.minikube/bin
I0116 04:17:13.695106 2448184 config.go:182] Loaded profile config "functional-032172": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 04:17:13.695746 2448184 config.go:182] Loaded profile config "functional-032172": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0116 04:17:13.696543 2448184 cli_runner.go:164] Run: docker container inspect functional-032172 --format={{.State.Status}}
I0116 04:17:13.722916 2448184 ssh_runner.go:195] Run: systemctl --version
I0116 04:17:13.722979 2448184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-032172
I0116 04:17:13.742214 2448184 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35326 SSHKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/machines/functional-032172/id_rsa Username:docker}
I0116 04:17:13.842859 2448184 build_images.go:151] Building image from path: /tmp/build.694907618.tar
I0116 04:17:13.842968 2448184 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0116 04:17:13.854475 2448184 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.694907618.tar
I0116 04:17:13.859381 2448184 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.694907618.tar: stat -c "%s %y" /var/lib/minikube/build/build.694907618.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.694907618.tar': No such file or directory
I0116 04:17:13.859421 2448184 ssh_runner.go:362] scp /tmp/build.694907618.tar --> /var/lib/minikube/build/build.694907618.tar (3072 bytes)
I0116 04:17:13.891266 2448184 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.694907618
I0116 04:17:13.904029 2448184 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.694907618 -xf /var/lib/minikube/build/build.694907618.tar
I0116 04:17:13.918779 2448184 crio.go:297] Building image: /var/lib/minikube/build/build.694907618
I0116 04:17:13.918861 2448184 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-032172 /var/lib/minikube/build/build.694907618 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0116 04:17:16.090209 2448184 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-032172 /var/lib/minikube/build/build.694907618 --cgroup-manager=cgroupfs: (2.171323549s)
I0116 04:17:16.090283 2448184 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.694907618
I0116 04:17:16.102785 2448184 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.694907618.tar
I0116 04:17:16.114801 2448184 build_images.go:207] Built localhost/my-image:functional-032172 from /tmp/build.694907618.tar
I0116 04:17:16.114843 2448184 build_images.go:123] succeeded building to: functional-032172
I0116 04:17:16.114848 2448184 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.11s)
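Note: the STEP lines in the build output above imply that testdata/build contains a three-line Dockerfile along these lines (a reconstruction from the log, not the verbatim file):

    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /

The failing pgrep buildkitd probe is expected on a crio node; as the stderr shows, the build itself is packed into a tarball, copied into the node, and executed there via sudo podman build.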
TestFunctional/parallel/ImageCommands/Setup (1.75s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.721592047s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-032172
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.75s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 image load --daemon gcr.io/google-containers/addon-resizer:functional-032172 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-032172 image load --daemon gcr.io/google-containers/addon-resizer:functional-032172 --alsologtostderr: (5.129319941s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.92s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-032172 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-032172 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-n8qp7" [1c6d053b-7dc2-4ce7-9656-59f7cd083ab5] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-n8qp7" [1c6d053b-7dc2-4ce7-9656-59f7cd083ab5] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.004921954s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.29s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 image load --daemon gcr.io/google-containers/addon-resizer:functional-032172 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-032172 image load --daemon gcr.io/google-containers/addon-resizer:functional-032172 --alsologtostderr: (2.980273529s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.24s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.05s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.600402376s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-032172
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 image load --daemon gcr.io/google-containers/addon-resizer:functional-032172 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-032172 image load --daemon gcr.io/google-containers/addon-resizer:functional-032172 --alsologtostderr: (4.076663704s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 image ls
E0116 04:16:30.575212 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.05s)

TestFunctional/parallel/ServiceCmd/List (0.45s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.45s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 service list -o json
functional_test.go:1493: Took "497.653322ms" to run "out/minikube-linux-arm64 -p functional-032172 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.50s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.61s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:31035
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.61s)

TestFunctional/parallel/ServiceCmd/Format (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.49s)

TestFunctional/parallel/ServiceCmd/URL (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:31035
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.51s)
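Note: the ServiceCmd subtests above can be replayed by hand with the same commands the harness shells out to (profile and service names taken from this log):

    out/minikube-linux-arm64 -p functional-032172 service list -o json
    out/minikube-linux-arm64 -p functional-032172 service --namespace=default --https --url hello-node
    out/minikube-linux-arm64 -p functional-032172 service hello-node --url

Each variant resolves the hello-node NodePort service to a concrete endpoint, e.g. https://192.168.49.2:31035 for the --https form.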
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 image save gcr.io/google-containers/addon-resizer:functional-032172 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-arm64 -p functional-032172 image save gcr.io/google-containers/addon-resizer:functional-032172 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr: (1.120405123s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.12s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.82s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 image rm gcr.io/google-containers/addon-resizer:functional-032172 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.82s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.74s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-032172 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-032172 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-032172 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-032172 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2445025: os: process already finished
helpers_test.go:502: unable to terminate pid 2444884: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.74s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.67s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p functional-032172 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr: (1.392163444s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.67s)
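Note: ImageSaveToFile and ImageLoadFromFile together exercise a tarball round-trip; the equivalent manual sequence would be (a sketch; the tarball path is illustrative):

    out/minikube-linux-arm64 -p functional-032172 image save gcr.io/google-containers/addon-resizer:functional-032172 /tmp/addon-resizer-save.tar
    out/minikube-linux-arm64 -p functional-032172 image load /tmp/addon-resizer-save.tar
    out/minikube-linux-arm64 -p functional-032172 image ls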
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-032172 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.63s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-032172 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [bcdeb0b1-c223-48a3-9af2-644815008aeb] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [bcdeb0b1-c223-48a3-9af2-644815008aeb] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.004847243s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.63s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-032172
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 image save --daemon gcr.io/google-containers/addon-resizer:functional-032172 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-arm64 -p functional-032172 image save --daemon gcr.io/google-containers/addon-resizer:functional-032172 --alsologtostderr: (1.536566039s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-032172
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.59s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-032172 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.182.25 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-032172 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
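Note: the TunnelCmd/serial sequence above reduces to the following manual flow (service name and jsonpath taken from this log):

    out/minikube-linux-arm64 -p functional-032172 tunnel --alsologtostderr &
    kubectl --context functional-032172 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}

AccessDirect then confirms the allocated LoadBalancer IP answers over HTTP (http://10.97.182.25 here); DeleteTunnel stops the tunnel process, which tears the route back down.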
TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1314: Took "365.371748ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1328: Took "74.017609ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1365: Took "374.035652ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1378: Took "78.139531ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)

TestFunctional/parallel/MountCmd/any-port (7.92s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-032172 /tmp/TestFunctionalparallelMountCmdany-port649405064/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1705378616340889010" to /tmp/TestFunctionalparallelMountCmdany-port649405064/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1705378616340889010" to /tmp/TestFunctionalparallelMountCmdany-port649405064/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1705378616340889010" to /tmp/TestFunctionalparallelMountCmdany-port649405064/001/test-1705378616340889010
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-032172 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (399.999923ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 16 04:16 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 16 04:16 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 16 04:16 test-1705378616340889010
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 ssh cat /mount-9p/test-1705378616340889010
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-032172 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [34d9597b-b3dd-4216-83d5-e3dc86e5752d] Pending
helpers_test.go:344: "busybox-mount" [34d9597b-b3dd-4216-83d5-e3dc86e5752d] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [34d9597b-b3dd-4216-83d5-e3dc86e5752d] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [34d9597b-b3dd-4216-83d5-e3dc86e5752d] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.00624853s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-032172 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-032172 /tmp/TestFunctionalparallelMountCmdany-port649405064/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.92s)
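Note: the 9p mount verification above can be reproduced manually (a sketch; the host path is illustrative):

    out/minikube-linux-arm64 mount -p functional-032172 /tmp/mount-test:/mount-9p &
    out/minikube-linux-arm64 -p functional-032172 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-arm64 -p functional-032172 ssh -- ls -la /mount-9p

The first findmnt attempt in the log exits 1, presumably because the mount daemon had not finished starting; the immediate retry succeeds.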
TestFunctional/parallel/MountCmd/specific-port (2.7s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-032172 /tmp/TestFunctionalparallelMountCmdspecific-port1307295003/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-032172 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (499.139561ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-032172 /tmp/TestFunctionalparallelMountCmdspecific-port1307295003/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-032172 ssh "sudo umount -f /mount-9p": exit status 1 (472.307331ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-032172 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-032172 /tmp/TestFunctionalparallelMountCmdspecific-port1307295003/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.70s)

TestFunctional/parallel/MountCmd/VerifyCleanup (3.71s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-032172 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2109335110/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-032172 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2109335110/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-032172 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2109335110/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-032172 ssh "findmnt -T" /mount1: exit status 1 (1.430616441s)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-032172 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-032172 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-032172 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2109335110/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-032172 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2109335110/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-032172 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2109335110/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (3.71s)

TestFunctional/delete_addon-resizer_images (0.09s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-032172
--- PASS: TestFunctional/delete_addon-resizer_images (0.09s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-032172
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-032172
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (91.82s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-865845 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0116 04:18:46.727113 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-865845 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m31.818494249s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (91.82s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.5s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-865845 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-865845 addons enable ingress --alsologtostderr -v=5: (11.502395106s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.50s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.71s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-865845 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.71s)

TestJSONOutput/start/Command (51.21s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-504073 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0116 04:22:39.464855 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/functional-032172/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-504073 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (51.207223997s)
--- PASS: TestJSONOutput/start/Command (51.21s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.84s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-504073 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.84s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.77s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-504073 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.77s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.83s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-504073 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-504073 --output=json --user=testUser: (5.830290615s)
--- PASS: TestJSONOutput/stop/Command (5.83s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.27s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-589617 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-589617 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (97.877221ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"cd93ac2f-1956-4d98-b511-cf1d95cf6f3c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-589617] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"56ae6833-bb79-49ed-9e77-c363eb676389","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17965"}}
	{"specversion":"1.0","id":"31963bcc-aedf-4d12-bde6-a31786399815","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"dfbc4d1a-b771-4377-89a5-230b677f1f13","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17965-2415678/kubeconfig"}}
	{"specversion":"1.0","id":"053e8e39-7cd6-4a53-8c71-8a01841d0a57","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-2415678/.minikube"}}
	{"specversion":"1.0","id":"d4eb9fe6-30a3-455a-906e-8f1214f2f781","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"9d667e2e-75b7-4efa-954a-a33c62445c33","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"fa7fc5fe-20de-4db5-a8d8-cc5e3b98310d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-589617" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-589617
--- PASS: TestErrorJSONOutput (0.27s)
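
The stdout above shows the shape of minikube's machine-readable output: each line is a CloudEvents-style JSON object, and the failure itself arrives as an io.k8s.sigs.minikube.error event carrying exitcode 56 and the name DRV_UNSUPPORTED_OS. A minimal Go sketch (hypothetical, not part of the test suite) that pulls the error event out of such a stream:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// minikubeEvent mirrors the fields visible in the log above; the data
	// payload is a flat string-to-string map in every event shown.
	type minikubeEvent struct {
		Type string            `json:"type"` // io.k8s.sigs.minikube.step / .info / .error
		Data map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var ev minikubeEvent
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // tolerate any non-JSON lines in the stream
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("exitcode=%s name=%s: %s\n",
					ev.Data["exitcode"], ev.Data["name"], ev.Data["message"])
			}
		}
	}

Fed the stdout block above, this would print: exitcode=56 name=DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/arm64.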

                                                
                                    
TestKicCustomNetwork/create_custom_network (49.2s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-440475 --network=
E0116 04:23:46.727195 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-440475 --network=: (47.130022869s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-440475" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-440475
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-440475: (2.045187835s)
--- PASS: TestKicCustomNetwork/create_custom_network (49.20s)
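
Besides a clean start, the only functional assertion here is the docker network ls --format {{.Name}} call, which confirms the profile got its own named network before cleanup. The same check, sketched in Go (the helper name is invented):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// networkExists reports whether a docker network with the given name is
	// present, using the same --format template the test uses.
	func networkExists(name string) (bool, error) {
		out, err := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
		if err != nil {
			return false, err
		}
		for _, n := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if n == name {
				return true, nil
			}
		}
		return false, nil
	}

	func main() {
		ok, err := networkExists("docker-network-440475")
		fmt.Println(ok, err)
	}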

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (34.63s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-410930 --network=bridge
E0116 04:24:01.385437 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/functional-032172/client.crt: no such file or directory
E0116 04:24:03.661792 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/client.crt: no such file or directory
E0116 04:24:03.667070 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/client.crt: no such file or directory
E0116 04:24:03.677302 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/client.crt: no such file or directory
E0116 04:24:03.697598 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/client.crt: no such file or directory
E0116 04:24:03.737769 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/client.crt: no such file or directory
E0116 04:24:03.818080 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/client.crt: no such file or directory
E0116 04:24:03.978903 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/client.crt: no such file or directory
E0116 04:24:04.299479 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/client.crt: no such file or directory
E0116 04:24:04.940254 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/client.crt: no such file or directory
E0116 04:24:06.220454 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/client.crt: no such file or directory
E0116 04:24:08.780888 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/client.crt: no such file or directory
E0116 04:24:13.901106 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/client.crt: no such file or directory
E0116 04:24:24.142143 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-410930 --network=bridge: (32.64438481s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-410930" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-410930
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-410930: (1.957146779s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (34.63s)

                                                
                                    
TestKicExistingNetwork (34.15s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-052855 --network=existing-network
E0116 04:24:44.622391 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-052855 --network=existing-network: (31.974053621s)
helpers_test.go:175: Cleaning up "existing-network-052855" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-052855
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-052855: (1.986330711s)
--- PASS: TestKicExistingNetwork (34.15s)

                                                
                                    
TestKicCustomSubnet (33.64s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-542257 --subnet=192.168.60.0/24
E0116 04:25:25.583819 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-542257 --subnet=192.168.60.0/24: (31.46677175s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-542257 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-542257" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-542257
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-542257: (2.155915387s)
--- PASS: TestKicCustomSubnet (33.64s)
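
The subnet assertion is a single docker network inspect with a go-template that digs out the first IPAM config entry, as seen at kic_custom_network_test.go:161 above. A standalone sketch of that comparison, using the profile name and subnet from this run:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		const want = "192.168.60.0/24" // value passed via --subnet above
		out, err := exec.Command("docker", "network", "inspect", "custom-subnet-542257",
			"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		if got := strings.TrimSpace(string(out)); got != want {
			fmt.Printf("subnet mismatch: got %s, want %s\n", got, want)
		} else {
			fmt.Println("subnet matches", want)
		}
	}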

                                                
                                    
TestKicStaticIP (31.83s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-711729 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-711729 --static-ip=192.168.200.200: (29.478537373s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-711729 ip
helpers_test.go:175: Cleaning up "static-ip-711729" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-711729
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-711729: (2.177464806s)
--- PASS: TestKicStaticIP (31.83s)
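
TestKicStaticIP requests --static-ip=192.168.200.200 and then reads the address back with minikube ip, so the assertion reduces to string equality. A sketch of that check (binary path and profile name taken from the log):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "static-ip-711729", "ip").Output()
		if err != nil {
			fmt.Println(err)
			return
		}
		if got := strings.TrimSpace(string(out)); got != "192.168.200.200" {
			fmt.Println("static IP not honored, got", got)
		}
	}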

                                                
                                    
TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.07s)

                                                
                                    
TestMinikubeProfile (68.9s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-351044 --driver=docker  --container-runtime=crio
E0116 04:26:17.543215 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/functional-032172/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-351044 --driver=docker  --container-runtime=crio: (31.922453899s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-353579 --driver=docker  --container-runtime=crio
E0116 04:26:45.225902 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/functional-032172/client.crt: no such file or directory
E0116 04:26:47.504352 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-353579 --driver=docker  --container-runtime=crio: (31.696629296s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-351044
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-353579
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-353579" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-353579
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-353579: (1.999420184s)
helpers_test.go:175: Cleaning up "first-351044" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-351044
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-351044: (1.978797996s)
--- PASS: TestMinikubeProfile (68.90s)
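
After each profile switch the test re-reads minikube profile list -ojson. A sketch of consuming that output; the schema assumed here (a "valid" array of profiles, each with a Name field) is an assumption about minikube's JSON shape, not something this log confirms:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// profileList is an assumed shape for `minikube profile list -ojson`.
	type profileList struct {
		Valid []struct {
			Name string `json:"Name"`
		} `json:"valid"`
	}

	func main() {
		out, err := exec.Command("out/minikube-linux-arm64", "profile", "list", "-ojson").Output()
		if err != nil {
			fmt.Println(err)
			return
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			fmt.Println(err)
			return
		}
		for _, p := range pl.Valid {
			fmt.Println("profile:", p.Name)
		}
	}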

                                                
                                    
TestMountStart/serial/StartWithMountFirst (6.89s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-597136 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-597136 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.885957605s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.89s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.32s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-597136 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.32s)
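
Each VerifyMount* step is just an ls of the mount point over ssh; a zero exit proves the host directory is attached (the mount is served over 9p, which is what the --mount-msize and --mount-port flags above configure). One probe, written out as a sketch:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "mount-start-1-597136",
			"ssh", "--", "ls", "/minikube-host").CombinedOutput()
		if err != nil {
			fmt.Println("mount probe failed:", err)
		}
		fmt.Print(string(out))
	}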

                                                
                                    
TestMountStart/serial/StartWithMountSecond (9.61s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-598941 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-598941 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.611897064s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.61s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.29s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-598941 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.67s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-597136 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-597136 --alsologtostderr -v=5: (1.671335514s)
--- PASS: TestMountStart/serial/DeleteFirst (1.67s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.31s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-598941 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.31s)

                                                
                                    
TestMountStart/serial/Stop (1.22s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-598941
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-598941: (1.223474849s)
--- PASS: TestMountStart/serial/Stop (1.22s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.94s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-598941
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-598941: (6.93846808s)
--- PASS: TestMountStart/serial/RestartStopped (7.94s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.31s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-598941 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.31s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (92.75s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-701570 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0116 04:28:46.727765 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/client.crt: no such file or directory
E0116 04:29:03.661372 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-arm64 start -p multinode-701570 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m32.170546668s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-arm64 -p multinode-701570 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (92.75s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.8s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-701570 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-701570 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-701570 -- rollout status deployment/busybox: (2.62483219s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-701570 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-701570 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-701570 -- exec busybox-5bc68d56bd-v42wl -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-701570 -- exec busybox-5bc68d56bd-x6w9z -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-701570 -- exec busybox-5bc68d56bd-v42wl -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-701570 -- exec busybox-5bc68d56bd-x6w9z -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-701570 -- exec busybox-5bc68d56bd-v42wl -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-701570 -- exec busybox-5bc68d56bd-x6w9z -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.80s)
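
The deploy check fans the same probe out over both busybox pods and three DNS names, from cluster-external (kubernetes.io) up to the fully qualified service name. The loop, compressed into a sketch (pod names are specific to this run):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		pods := []string{"busybox-5bc68d56bd-v42wl", "busybox-5bc68d56bd-x6w9z"}
		names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
		for _, pod := range pods {
			for _, name := range names {
				// the test shells out via the minikube-bundled kubectl;
				// plain kubectl with the right context is equivalent here
				err := exec.Command("kubectl", "--context", "multinode-701570",
					"exec", pod, "--", "nslookup", name).Run()
				if err != nil {
					fmt.Printf("%s: lookup of %s failed: %v\n", pod, name, err)
				}
			}
		}
	}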

                                                
                                    
TestMultiNode/serial/AddNode (50.44s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-701570 -v 3 --alsologtostderr
E0116 04:30:09.776929 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/client.crt: no such file or directory
multinode_test.go:111: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-701570 -v 3 --alsologtostderr: (49.713510368s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-arm64 -p multinode-701570 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (50.44s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-701570 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.35s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.35s)

                                                
                                    
TestMultiNode/serial/CopyFile (11.4s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-arm64 -p multinode-701570 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-701570 cp testdata/cp-test.txt multinode-701570:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-701570 ssh -n multinode-701570 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-701570 cp multinode-701570:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3922223922/001/cp-test_multinode-701570.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-701570 ssh -n multinode-701570 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-701570 cp multinode-701570:/home/docker/cp-test.txt multinode-701570-m02:/home/docker/cp-test_multinode-701570_multinode-701570-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-701570 ssh -n multinode-701570 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-701570 ssh -n multinode-701570-m02 "sudo cat /home/docker/cp-test_multinode-701570_multinode-701570-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-701570 cp multinode-701570:/home/docker/cp-test.txt multinode-701570-m03:/home/docker/cp-test_multinode-701570_multinode-701570-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-701570 ssh -n multinode-701570 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-701570 ssh -n multinode-701570-m03 "sudo cat /home/docker/cp-test_multinode-701570_multinode-701570-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-701570 cp testdata/cp-test.txt multinode-701570-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-701570 ssh -n multinode-701570-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-701570 cp multinode-701570-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3922223922/001/cp-test_multinode-701570-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-701570 ssh -n multinode-701570-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-701570 cp multinode-701570-m02:/home/docker/cp-test.txt multinode-701570:/home/docker/cp-test_multinode-701570-m02_multinode-701570.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-701570 ssh -n multinode-701570-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-701570 ssh -n multinode-701570 "sudo cat /home/docker/cp-test_multinode-701570-m02_multinode-701570.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-701570 cp multinode-701570-m02:/home/docker/cp-test.txt multinode-701570-m03:/home/docker/cp-test_multinode-701570-m02_multinode-701570-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-701570 ssh -n multinode-701570-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-701570 ssh -n multinode-701570-m03 "sudo cat /home/docker/cp-test_multinode-701570-m02_multinode-701570-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-701570 cp testdata/cp-test.txt multinode-701570-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-701570 ssh -n multinode-701570-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-701570 cp multinode-701570-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3922223922/001/cp-test_multinode-701570-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-701570 ssh -n multinode-701570-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-701570 cp multinode-701570-m03:/home/docker/cp-test.txt multinode-701570:/home/docker/cp-test_multinode-701570-m03_multinode-701570.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-701570 ssh -n multinode-701570-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-701570 ssh -n multinode-701570 "sudo cat /home/docker/cp-test_multinode-701570-m03_multinode-701570.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-701570 cp multinode-701570-m03:/home/docker/cp-test.txt multinode-701570-m02:/home/docker/cp-test_multinode-701570-m03_multinode-701570-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-701570 ssh -n multinode-701570-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-701570 ssh -n multinode-701570-m02 "sudo cat /home/docker/cp-test_multinode-701570-m03_multinode-701570-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.40s)
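
CopyFile walks every direction of `minikube cp` (local to node, node to local, node to node) and verifies each hop by cat-ing the file back over ssh -n <node>. One hop of that round trip as a sketch:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func mk(args ...string) ([]byte, error) {
		return exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
	}

	func main() {
		// push local testdata onto the control-plane node...
		if out, err := mk("-p", "multinode-701570", "cp",
			"testdata/cp-test.txt", "multinode-701570:/home/docker/cp-test.txt"); err != nil {
			fmt.Println("cp failed:", err, string(out))
			return
		}
		// ...then read it back through ssh to confirm the bytes landed
		out, err := mk("-p", "multinode-701570", "ssh", "-n", "multinode-701570",
			"sudo cat /home/docker/cp-test.txt")
		fmt.Println(string(out), err)
	}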

                                                
                                    
TestMultiNode/serial/StopNode (2.4s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p multinode-701570 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-arm64 -p multinode-701570 node stop m03: (1.246852201s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p multinode-701570 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-701570 status: exit status 7 (567.690895ms)

                                                
                                                
-- stdout --
	multinode-701570
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-701570-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-701570-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-arm64 -p multinode-701570 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-701570 status --alsologtostderr: exit status 7 (581.878071ms)

                                                
                                                
-- stdout --
	multinode-701570
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-701570-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-701570-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0116 04:30:35.770615 2494465 out.go:296] Setting OutFile to fd 1 ...
	I0116 04:30:35.770885 2494465 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 04:30:35.770945 2494465 out.go:309] Setting ErrFile to fd 2...
	I0116 04:30:35.770966 2494465 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 04:30:35.771279 2494465 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-2415678/.minikube/bin
	I0116 04:30:35.771492 2494465 out.go:303] Setting JSON to false
	I0116 04:30:35.771609 2494465 mustload.go:65] Loading cluster: multinode-701570
	I0116 04:30:35.771701 2494465 notify.go:220] Checking for updates...
	I0116 04:30:35.772995 2494465 config.go:182] Loaded profile config "multinode-701570": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 04:30:35.773105 2494465 status.go:255] checking status of multinode-701570 ...
	I0116 04:30:35.773771 2494465 cli_runner.go:164] Run: docker container inspect multinode-701570 --format={{.State.Status}}
	I0116 04:30:35.797033 2494465 status.go:330] multinode-701570 host status = "Running" (err=<nil>)
	I0116 04:30:35.797068 2494465 host.go:66] Checking if "multinode-701570" exists ...
	I0116 04:30:35.797442 2494465 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-701570
	I0116 04:30:35.816552 2494465 host.go:66] Checking if "multinode-701570" exists ...
	I0116 04:30:35.816976 2494465 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0116 04:30:35.817030 2494465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701570
	I0116 04:30:35.847117 2494465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35391 SSHKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/machines/multinode-701570/id_rsa Username:docker}
	I0116 04:30:35.943120 2494465 ssh_runner.go:195] Run: systemctl --version
	I0116 04:30:35.948623 2494465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 04:30:35.961981 2494465 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 04:30:36.038168 2494465 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:55 SystemTime:2024-01-16 04:30:36.027179316 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0116 04:30:36.038832 2494465 kubeconfig.go:92] found "multinode-701570" server: "https://192.168.58.2:8443"
	I0116 04:30:36.038860 2494465 api_server.go:166] Checking apiserver status ...
	I0116 04:30:36.038913 2494465 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 04:30:36.052808 2494465 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1240/cgroup
	I0116 04:30:36.064935 2494465 api_server.go:182] apiserver freezer: "3:freezer:/docker/28e792c4e9c30d33bd257e8246d0d4bffbcaeaf8e6ab5fe81d7d83b6cf928fc0/crio/crio-00c13145cb3f35a92eff65339c93977b6b72aed27621358591b1182e7ad4f7f3"
	I0116 04:30:36.065014 2494465 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/28e792c4e9c30d33bd257e8246d0d4bffbcaeaf8e6ab5fe81d7d83b6cf928fc0/crio/crio-00c13145cb3f35a92eff65339c93977b6b72aed27621358591b1182e7ad4f7f3/freezer.state
	I0116 04:30:36.075963 2494465 api_server.go:204] freezer state: "THAWED"
	I0116 04:30:36.075995 2494465 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0116 04:30:36.085086 2494465 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0116 04:30:36.085120 2494465 status.go:421] multinode-701570 apiserver status = Running (err=<nil>)
	I0116 04:30:36.085157 2494465 status.go:257] multinode-701570 status: &{Name:multinode-701570 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0116 04:30:36.085180 2494465 status.go:255] checking status of multinode-701570-m02 ...
	I0116 04:30:36.085504 2494465 cli_runner.go:164] Run: docker container inspect multinode-701570-m02 --format={{.State.Status}}
	I0116 04:30:36.103247 2494465 status.go:330] multinode-701570-m02 host status = "Running" (err=<nil>)
	I0116 04:30:36.103275 2494465 host.go:66] Checking if "multinode-701570-m02" exists ...
	I0116 04:30:36.103582 2494465 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-701570-m02
	I0116 04:30:36.130883 2494465 host.go:66] Checking if "multinode-701570-m02" exists ...
	I0116 04:30:36.131193 2494465 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0116 04:30:36.131243 2494465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701570-m02
	I0116 04:30:36.149161 2494465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35396 SSHKeyPath:/home/jenkins/minikube-integration/17965-2415678/.minikube/machines/multinode-701570-m02/id_rsa Username:docker}
	I0116 04:30:36.243334 2494465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 04:30:36.257763 2494465 status.go:257] multinode-701570-m02 status: &{Name:multinode-701570-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0116 04:30:36.257796 2494465 status.go:255] checking status of multinode-701570-m03 ...
	I0116 04:30:36.258148 2494465 cli_runner.go:164] Run: docker container inspect multinode-701570-m03 --format={{.State.Status}}
	I0116 04:30:36.275254 2494465 status.go:330] multinode-701570-m03 host status = "Stopped" (err=<nil>)
	I0116 04:30:36.275284 2494465 status.go:343] host is not running, skipping remaining checks
	I0116 04:30:36.275291 2494465 status.go:257] multinode-701570-m03 status: &{Name:multinode-701570-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.40s)
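
Note the status semantics this subtest relies on: with m03 stopped, minikube status still prints per-node state but exits non-zero (7 in both runs above), so a caller has to capture stdout even when the command "fails". Sketch:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "-p", "multinode-701570", "status")
		out, err := cmd.Output() // stdout is still populated on a non-zero exit
		if ee, ok := err.(*exec.ExitError); ok {
			fmt.Println("exit code:", ee.ExitCode()) // 7 in the runs above
		}
		fmt.Print(string(out))
	}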

                                                
                                    
TestMultiNode/serial/StartAfterStop (13.37s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-701570 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-701570 node start m03 --alsologtostderr: (12.508031998s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p multinode-701570 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (13.37s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (122.33s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-701570
multinode_test.go:318: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-701570
multinode_test.go:318: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-701570: (24.921763284s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-701570 --wait=true -v=8 --alsologtostderr
E0116 04:31:17.543609 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/functional-032172/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-arm64 start -p multinode-701570 --wait=true -v=8 --alsologtostderr: (1m37.228050695s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-701570
--- PASS: TestMultiNode/serial/RestartKeepsNodes (122.33s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.2s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-701570 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p multinode-701570 node delete m03: (4.423341009s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p multinode-701570 status --alsologtostderr
multinode_test.go:442: (dbg) Run:  docker volume ls
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.20s)
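
The final readiness check renders one line per node with a go-template over the Ready condition and expects nothing but "True" once m03 is gone. The same assertion via jsonpath (the jsonpath expression is mine, not the test's):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("kubectl", "get", "nodes", "-o",
			`jsonpath={range .items[*]}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}`).Output()
		if err != nil {
			fmt.Println(err)
			return
		}
		for _, s := range strings.Fields(string(out)) {
			if s != "True" {
				fmt.Println("node not ready:", s)
			}
		}
	}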

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.05s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-arm64 -p multinode-701570 stop
multinode_test.go:342: (dbg) Done: out/minikube-linux-arm64 -p multinode-701570 stop: (23.828129708s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-arm64 -p multinode-701570 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-701570 status: exit status 7 (112.292157ms)

                                                
                                                
-- stdout --
	multinode-701570
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-701570-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p multinode-701570 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-701570 status --alsologtostderr: exit status 7 (106.062809ms)

                                                
                                                
-- stdout --
	multinode-701570
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-701570-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0116 04:33:21.184721 2502622 out.go:296] Setting OutFile to fd 1 ...
	I0116 04:33:21.184890 2502622 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 04:33:21.184900 2502622 out.go:309] Setting ErrFile to fd 2...
	I0116 04:33:21.184906 2502622 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 04:33:21.185199 2502622 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-2415678/.minikube/bin
	I0116 04:33:21.185384 2502622 out.go:303] Setting JSON to false
	I0116 04:33:21.185485 2502622 mustload.go:65] Loading cluster: multinode-701570
	I0116 04:33:21.185567 2502622 notify.go:220] Checking for updates...
	I0116 04:33:21.185917 2502622 config.go:182] Loaded profile config "multinode-701570": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0116 04:33:21.185929 2502622 status.go:255] checking status of multinode-701570 ...
	I0116 04:33:21.186736 2502622 cli_runner.go:164] Run: docker container inspect multinode-701570 --format={{.State.Status}}
	I0116 04:33:21.204442 2502622 status.go:330] multinode-701570 host status = "Stopped" (err=<nil>)
	I0116 04:33:21.204465 2502622 status.go:343] host is not running, skipping remaining checks
	I0116 04:33:21.204487 2502622 status.go:257] multinode-701570 status: &{Name:multinode-701570 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0116 04:33:21.204514 2502622 status.go:255] checking status of multinode-701570-m02 ...
	I0116 04:33:21.204993 2502622 cli_runner.go:164] Run: docker container inspect multinode-701570-m02 --format={{.State.Status}}
	I0116 04:33:21.222290 2502622 status.go:330] multinode-701570-m02 host status = "Stopped" (err=<nil>)
	I0116 04:33:21.222311 2502622 status.go:343] host is not running, skipping remaining checks
	I0116 04:33:21.222319 2502622 status.go:257] multinode-701570-m02 status: &{Name:multinode-701570-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.05s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (79.42s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-701570 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0116 04:33:46.727624 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/client.crt: no such file or directory
E0116 04:34:03.661607 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-linux-arm64 start -p multinode-701570 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m18.62713049s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p multinode-701570 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (79.42s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (34.18s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-701570
multinode_test.go:480: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-701570-m02 --driver=docker  --container-runtime=crio
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-701570-m02 --driver=docker  --container-runtime=crio: exit status 14 (117.907916ms)

                                                
                                                
-- stdout --
	* [multinode-701570-m02] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17965
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17965-2415678/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-2415678/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-701570-m02' is duplicated with machine name 'multinode-701570-m02' in profile 'multinode-701570'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-701570-m03 --driver=docker  --container-runtime=crio
multinode_test.go:488: (dbg) Done: out/minikube-linux-arm64 start -p multinode-701570-m03 --driver=docker  --container-runtime=crio: (31.6322538s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-701570
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-701570: exit status 80 (357.581611ms)

-- stdout --
	* Adding node m03 to cluster multinode-701570
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-701570-m03 already exists in multinode-701570-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-701570-m03
multinode_test.go:500: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-701570-m03: (2.003121529s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.18s)
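
Note: the two non-zero exits above are the intended assertions: a new profile may not reuse a machine name owned by another profile, and `node add` refuses a machine name that already belongs to a standalone profile. A hedged sketch of the recovery path (the retry is illustrative; the test itself only deletes the conflicting profile):

    minikube delete -p multinode-701570-m03    # frees the conflicting machine name
    minikube node add -p multinode-701570      # would now be able to claim m03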

TestPreload (170.72s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-058559 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0116 04:36:17.543667 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/functional-032172/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-058559 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m22.870643843s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-058559 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-058559 image pull gcr.io/k8s-minikube/busybox: (2.126996416s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-058559
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-058559: (5.811915199s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-058559 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0116 04:37:40.586166 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/functional-032172/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-058559 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m17.215023906s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-058559 image list
helpers_test.go:175: Cleaning up "test-preload-058559" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-058559
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-058559: (2.399016202s)
--- PASS: TestPreload (170.72s)

TestScheduledStopUnix (107.36s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-065202 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-065202 --memory=2048 --driver=docker  --container-runtime=crio: (29.896859907s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-065202 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-065202 -n scheduled-stop-065202
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-065202 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-065202 --cancel-scheduled
E0116 04:38:46.727850 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/client.crt: no such file or directory
E0116 04:39:03.661374 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-065202 -n scheduled-stop-065202
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-065202
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-065202 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-065202
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-065202: exit status 7 (97.666518ms)

-- stdout --
	scheduled-stop-065202
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-065202 -n scheduled-stop-065202
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-065202 -n scheduled-stop-065202: exit status 7 (86.514866ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-065202" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-065202
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-065202: (5.483827715s)
--- PASS: TestScheduledStopUnix (107.36s)
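
Note: the scheduled-stop flags exercised above compose as follows (a usage sketch assembled from the commands in this test, with a placeholder profile name):

    minikube stop -p <profile> --schedule 5m                 # arm a stop five minutes out
    minikube status -p <profile> --format={{.TimeToStop}}    # inspect the countdown
    minikube stop -p <profile> --cancel-scheduled            # disarm it
    minikube stop -p <profile> --schedule 15s                # re-arm with a short fuse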

TestInsufficientStorage (11.11s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-687466 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-687466 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.433552999s)

-- stdout --
	{"specversion":"1.0","id":"d13b8426-cdad-4521-9597-40ce21ba0e0c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-687466] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7716c688-6501-49a8-afc4-f44167cc9724","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17965"}}
	{"specversion":"1.0","id":"ea977fdd-a576-4c8f-9032-065aa1e07b75","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c5e4e790-e172-400d-bebd-9c932cb2e105","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17965-2415678/kubeconfig"}}
	{"specversion":"1.0","id":"59f7523c-c60e-406f-ba36-6a745f774c90","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-2415678/.minikube"}}
	{"specversion":"1.0","id":"904b25da-ff74-4b62-8526-7543e4201ef6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"eb2bc3dd-01ff-468f-9403-522540714518","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"00445053-f04c-41f9-ae54-76e8054d7bbd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"3ee0b82a-efd3-4335-a7dc-f9405639d543","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"aa048df7-9186-44c2-bbf9-a23e957987bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"b4961a22-58ec-4f09-9675-1f601a913572","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"a0833817-832a-47ea-90f5-2b1f2786c029","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-687466 in cluster insufficient-storage-687466","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"13df17e9-9ac9-405f-a96a-eb7c113937b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1704759386-17866 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"811d8e8d-0284-40ee-aba8-8abc143dfc09","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"cd9a5a1c-40c3-4a81-845c-9e2edaed7da7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-687466 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-687466 --output=json --layout=cluster: exit status 7 (351.663041ms)

-- stdout --
	{"Name":"insufficient-storage-687466","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-687466","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0116 04:40:08.204140 2519391 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-687466" does not appear in /home/jenkins/minikube-integration/17965-2415678/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-687466 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-687466 --output=json --layout=cluster: exit status 7 (323.992679ms)

-- stdout --
	{"Name":"insufficient-storage-687466","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-687466","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0116 04:40:08.528615 2519444 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-687466" does not appear in /home/jenkins/minikube-integration/17965-2415678/kubeconfig
	E0116 04:40:08.541567 2519444 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/insufficient-storage-687466/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-687466" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-687466
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-687466: (2.001506825s)
--- PASS: TestInsufficientStorage (11.11s)
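
Note: each line of the --output=json stream above is a self-contained CloudEvents object, so the out-of-space condition can be detected mechanically. A minimal sketch, assuming jq is installed (field names are taken verbatim from the captured output; the shown result line mirrors the payload above):

    out/minikube-linux-arm64 start -p insufficient-storage-687466 --output=json --driver=docker --container-runtime=crio \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name) (exit \(.data.exitcode)): \(.data.message)"'
    RSRC_DOCKER_STORAGE (exit 26): Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.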

TestRunningBinaryUpgrade (79.4s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1407064961 start -p running-upgrade-046530 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1407064961 start -p running-upgrade-046530 --memory=2200 --vm-driver=docker  --container-runtime=crio: (38.729263512s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-046530 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-046530 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (36.371613939s)
helpers_test.go:175: Cleaning up "running-upgrade-046530" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-046530
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-046530: (2.902963103s)
--- PASS: TestRunningBinaryUpgrade (79.40s)

TestKubernetesUpgrade (420.74s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-793693 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-793693 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m13.099894223s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-793693
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-793693: (1.392628787s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-793693 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-793693 status --format={{.Host}}: exit status 7 (125.612768ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-793693 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-793693 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m51.043942099s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-793693 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-793693 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-793693 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (148.500903ms)

-- stdout --
	* [kubernetes-upgrade-793693] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17965
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17965-2415678/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-2415678/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-793693
	    minikube start -p kubernetes-upgrade-793693 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7936932 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-793693 --kubernetes-version=v1.29.0-rc.2
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-793693 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-793693 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (52.444698916s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-793693" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-793693
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-793693: (2.380369485s)
--- PASS: TestKubernetesUpgrade (420.74s)
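
Note: the version probe at version_upgrade_test.go:248 is also useful interactively; a hedged one-liner for confirming the post-upgrade server version (jq assumed, expected output shown for this run's target version):

    kubectl --context kubernetes-upgrade-793693 version --output=json | jq -r .serverVersion.gitVersion
    v1.29.0-rc.2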

TestMissingContainerUpgrade (159.99s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3404029104 start -p missing-upgrade-358441 --memory=2200 --driver=docker  --container-runtime=crio
E0116 04:40:26.705672 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/client.crt: no such file or directory
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3404029104 start -p missing-upgrade-358441 --memory=2200 --driver=docker  --container-runtime=crio: (1m14.834345599s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-358441
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-358441: (10.492548838s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-358441
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-358441 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-358441 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m11.061617703s)
helpers_test.go:175: Cleaning up "missing-upgrade-358441" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-358441
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-358441: (2.215949672s)
--- PASS: TestMissingContainerUpgrade (159.99s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-716356 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-716356 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (92.306531ms)

-- stdout --
	* [NoKubernetes-716356] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17965
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17965-2415678/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-2415678/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
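
Note: as the usage error spells out, --kubernetes-version and --no-kubernetes are mutually exclusive; either of the following would be accepted (a sketch built from the commands in the error text above):

    minikube config unset kubernetes-version    # clear any global version pin
    minikube start -p NoKubernetes-716356 --no-kubernetes --driver=docker --container-runtime=crio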

TestNoKubernetes/serial/StartWithK8s (44.69s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-716356 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-716356 --driver=docker  --container-runtime=crio: (44.2431514s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-716356 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (44.69s)

TestNoKubernetes/serial/StartWithStopK8s (29.21s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-716356 --no-kubernetes --driver=docker  --container-runtime=crio
E0116 04:41:17.543778 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/functional-032172/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-716356 --no-kubernetes --driver=docker  --container-runtime=crio: (26.323609831s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-716356 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-716356 status -o json: exit status 2 (412.96506ms)

-- stdout --
	{"Name":"NoKubernetes-716356","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-716356
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-716356: (2.475266992s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (29.21s)

TestNoKubernetes/serial/Start (9.74s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-716356 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-716356 --no-kubernetes --driver=docker  --container-runtime=crio: (9.738343461s)
--- PASS: TestNoKubernetes/serial/Start (9.74s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-716356 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-716356 "sudo systemctl is-active --quiet service kubelet": exit status 1 (303.015816ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)
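
Note: the `ssh: Process exited with status 3` above is the expected outcome: systemctl is-active exits 0 only when the unit is active, and 3 ("program is not running" in LSB terms) when it is inactive, which is exactly what this assertion wants. Dropping --quiet makes the state readable (illustrative output):

    minikube ssh -p NoKubernetes-716356 "sudo systemctl is-active kubelet"
    inactive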

TestNoKubernetes/serial/ProfileList (3.65s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-arm64 profile list: (3.300522589s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (3.65s)

TestNoKubernetes/serial/Stop (1.23s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-716356
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-716356: (1.230734643s)
--- PASS: TestNoKubernetes/serial/Stop (1.23s)

TestNoKubernetes/serial/StartNoArgs (7.86s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-716356 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-716356 --driver=docker  --container-runtime=crio: (7.85515397s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.86s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.3s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-716356 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-716356 "sudo systemctl is-active --quiet service kubelet": exit status 1 (300.388668ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.30s)

TestStoppedBinaryUpgrade/Setup (1.23s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.23s)

TestStoppedBinaryUpgrade/Upgrade (74.43s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2874112553 start -p stopped-upgrade-300003 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2874112553 start -p stopped-upgrade-300003 --memory=2200 --vm-driver=docker  --container-runtime=crio: (40.806305157s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2874112553 -p stopped-upgrade-300003 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2874112553 -p stopped-upgrade-300003 stop: (3.372969091s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-300003 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0116 04:43:46.727056 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/client.crt: no such file or directory
E0116 04:44:03.661766 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-300003 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (30.249591301s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (74.43s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.1s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-300003
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-300003: (1.102787377s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.10s)

TestPause/serial/Start (54.28s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-093700 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E0116 04:46:17.543235 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/functional-032172/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-093700 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (54.276805297s)
--- PASS: TestPause/serial/Start (54.28s)

TestPause/serial/SecondStartNoReconfiguration (30.67s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-093700 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0116 04:46:49.777759 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-093700 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (30.644942272s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (30.67s)

TestPause/serial/Pause (0.83s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-093700 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.83s)

TestPause/serial/VerifyStatus (0.4s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-093700 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-093700 --output=json --layout=cluster: exit status 2 (399.257064ms)

-- stdout --
	{"Name":"pause-093700","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-093700","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.40s)
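
Note the HTTP-flavored status codes in the --layout=cluster payload: 200 healthy, 405 stopped, 418 paused, 507 insufficient storage. A hedged jq one-liner for pulling out per-component state (field names as printed above; illustrative output):

    out/minikube-linux-arm64 status -p pause-093700 --output=json --layout=cluster \
      | jq -r '.Nodes[].Components | to_entries[] | "\(.key)=\(.value.StatusName)"'
    apiserver=Paused
    kubelet=Stopped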

TestPause/serial/Unpause (0.99s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-093700 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.99s)

TestPause/serial/PauseAgain (1.06s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-093700 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-093700 --alsologtostderr -v=5: (1.055186826s)
--- PASS: TestPause/serial/PauseAgain (1.06s)

TestPause/serial/DeletePaused (3.19s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-093700 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-093700 --alsologtostderr -v=5: (3.188818354s)
--- PASS: TestPause/serial/DeletePaused (3.19s)

TestPause/serial/VerifyDeletedResources (0.37s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-093700
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-093700: exit status 1 (18.948431ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-093700: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.37s)

TestNetworkPlugins/group/false (6.38s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-965383 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-965383 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (384.007361ms)

-- stdout --
	* [false-965383] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17965
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17965-2415678/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-2415678/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0116 04:47:44.840280 2555584 out.go:296] Setting OutFile to fd 1 ...
	I0116 04:47:44.840431 2555584 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 04:47:44.840441 2555584 out.go:309] Setting ErrFile to fd 2...
	I0116 04:47:44.840447 2555584 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 04:47:44.840693 2555584 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17965-2415678/.minikube/bin
	I0116 04:47:44.841145 2555584 out.go:303] Setting JSON to false
	I0116 04:47:44.842071 2555584 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":41396,"bootTime":1705339069,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0116 04:47:44.842146 2555584 start.go:138] virtualization:  
	I0116 04:47:44.844967 2555584 out.go:177] * [false-965383] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0116 04:47:44.846875 2555584 out.go:177]   - MINIKUBE_LOCATION=17965
	I0116 04:47:44.848820 2555584 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 04:47:44.847010 2555584 notify.go:220] Checking for updates...
	I0116 04:47:44.852677 2555584 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17965-2415678/kubeconfig
	I0116 04:47:44.854713 2555584 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17965-2415678/.minikube
	I0116 04:47:44.856694 2555584 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0116 04:47:44.858749 2555584 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 04:47:44.861375 2555584 config.go:182] Loaded profile config "kubernetes-upgrade-793693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0116 04:47:44.861480 2555584 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 04:47:44.918321 2555584 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0116 04:47:44.918449 2555584 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0116 04:47:45.073830 2555584 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2024-01-16 04:47:45.056372191 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0116 04:47:45.073941 2555584 docker.go:295] overlay module found
	I0116 04:47:45.075979 2555584 out.go:177] * Using the docker driver based on user configuration
	I0116 04:47:45.078220 2555584 start.go:298] selected driver: docker
	I0116 04:47:45.078240 2555584 start.go:902] validating driver "docker" against <nil>
	I0116 04:47:45.078253 2555584 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 04:47:45.080805 2555584 out.go:177] 
	W0116 04:47:45.083066 2555584 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0116 04:47:45.085009 2555584 out.go:177] 

** /stderr **
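
Note: the MK_USAGE exit is this test's expected result: minikube validates up front that the crio runtime has some CNI, so --cni=false is rejected before any node is created. A start that would pass this validation (hedged; any concrete CNI choice such as bridge would do):

    minikube start -p false-965383 --memory=2048 --cni=bridge --driver=docker --container-runtime=crio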
net_test.go:88: 
----------------------- debugLogs start: false-965383 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-965383

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-965383

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-965383

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-965383

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-965383

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-965383

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-965383

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-965383

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-965383

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-965383

>>> host: /etc/nsswitch.conf:
* Profile "false-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965383"

>>> host: /etc/hosts:
* Profile "false-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965383"

>>> host: /etc/resolv.conf:
* Profile "false-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965383"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-965383

>>> host: crictl pods:
* Profile "false-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965383"

>>> host: crictl containers:
* Profile "false-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965383"

>>> k8s: describe netcat deployment:
error: context "false-965383" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-965383" does not exist

>>> k8s: netcat logs:
error: context "false-965383" does not exist

>>> k8s: describe coredns deployment:
error: context "false-965383" does not exist

>>> k8s: describe coredns pods:
error: context "false-965383" does not exist

>>> k8s: coredns logs:
error: context "false-965383" does not exist

>>> k8s: describe api server pod(s):
error: context "false-965383" does not exist

>>> k8s: api server logs:
error: context "false-965383" does not exist

>>> host: /etc/cni:
* Profile "false-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965383"

>>> host: ip a s:
* Profile "false-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965383"

>>> host: ip r s:
* Profile "false-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965383"

>>> host: iptables-save:
* Profile "false-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965383"

>>> host: iptables table nat:
* Profile "false-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965383"

>>> k8s: describe kube-proxy daemon set:
error: context "false-965383" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-965383" does not exist

>>> k8s: kube-proxy logs:
error: context "false-965383" does not exist

>>> host: kubelet daemon status:
* Profile "false-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965383"

>>> host: kubelet daemon config:
* Profile "false-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965383"

>>> k8s: kubelet logs:
* Profile "false-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965383"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965383"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965383"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17965-2415678/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Jan 2024 04:43:19 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: kubernetes-upgrade-793693
contexts:
- context:
    cluster: kubernetes-upgrade-793693
    user: kubernetes-upgrade-793693
  name: kubernetes-upgrade-793693
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-793693
  user:
    client-certificate: /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/kubernetes-upgrade-793693/client.crt
    client-key: /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/kubernetes-upgrade-793693/client.key
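Note: current-context is empty and the only context left in this kubeconfig is kubernetes-upgrade-793693, which is consistent with every "context was not found for specified context: false-965383" error in this dump. A quick cross-check (generic kubectl usage, not part of the recorded run):

kubectl config get-contexts
# only kubernetes-upgrade-793693 should be listed; "false-965383" is absent,
# so every kubectl --context false-965383 call in this dump fails as shown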

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-965383

>>> host: docker daemon status:
* Profile "false-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965383"

>>> host: docker daemon config:
* Profile "false-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965383"

>>> host: /etc/docker/daemon.json:
* Profile "false-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965383"

>>> host: docker system info:
* Profile "false-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965383"

>>> host: cri-docker daemon status:
* Profile "false-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965383"

>>> host: cri-docker daemon config:
* Profile "false-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965383"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965383"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965383"

>>> host: cri-dockerd version:
* Profile "false-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965383"

>>> host: containerd daemon status:
* Profile "false-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965383"

>>> host: containerd daemon config:
* Profile "false-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965383"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965383"

>>> host: /etc/containerd/config.toml:
* Profile "false-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965383"

>>> host: containerd config dump:
* Profile "false-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965383"

>>> host: crio daemon status:
* Profile "false-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965383"

>>> host: crio daemon config:
* Profile "false-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965383"

>>> host: /etc/crio:
* Profile "false-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965383"

>>> host: crio config:
* Profile "false-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965383"

----------------------- debugLogs end: false-965383 [took: 5.759689875s] --------------------------------
helpers_test.go:175: Cleaning up "false-965383" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-965383
--- PASS: TestNetworkPlugins/group/false (6.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (133.45s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-940621 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E0116 04:51:17.542780 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/functional-032172/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-940621 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (2m13.45462379s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (133.45s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.5s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-940621 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [890c2586-d0fa-46fb-8b18-f531734bf1b9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [890c2586-d0fa-46fb-8b18-f531734bf1b9] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003759129s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-940621 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.50s)
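The testdata/busybox.yaml fixture is not reproduced in this log. A minimal manifest consistent with the steps above would look like the following sketch: the pod name, namespace, and integration-test=busybox label are taken from the log, the image is the one reported later by image list, and the sleep command is an assumption to keep the pod Running.

kubectl --context old-k8s-version-940621 apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox                  # the exec step targets a pod literally named "busybox"
  namespace: default             # the 8m0s wait loop watches namespace "default"
  labels:
    integration-test: busybox    # selector the wait loop matches on
spec:
  containers:
  - name: busybox
    image: gcr.io/k8s-minikube/busybox:1.28.4-glibc   # image seen in VerifyKubernetesImages below
    command: ['sh', '-c', 'sleep 3600']               # assumed; any long-running command works
EOF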

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-940621 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-940621 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.017678885s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-940621 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.20s)
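The --images/--registries flags above point the metrics-server addon at a fake registry. One way to confirm the override landed, assuming the standard deployment name (a generic check, not part of the test):

kubectl --context old-k8s-version-940621 -n kube-system get deploy metrics-server \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
# should print an image hosted on fake.domain; the exact repository path is
# whatever minikube composes from the two flags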

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-940621 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-940621 --alsologtostderr -v=3: (12.032787806s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.03s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-940621 -n old-k8s-version-940621
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-940621 -n old-k8s-version-940621: exit status 7 (102.569618ms)

-- stdout --
	Stopped
-- /stdout --

start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-940621 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)
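minikube status deliberately exits non-zero while the host is not Running, which is why the harness records "exit status 7 (may be ok)" instead of failing. The behaviour reproduces directly on a stopped profile:

out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-940621 -n old-k8s-version-940621
echo $?   # 7 while stopped, matching the Non-zero exit and the "Stopped" stdout above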

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (427.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-940621 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-940621 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (7m6.733604771s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-940621 -n old-k8s-version-940621
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (427.13s)

TestStartStop/group/no-preload/serial/FirstStart (68.02s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-211896 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0116 04:53:46.727274 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-211896 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (1m8.024870338s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (68.02s)

TestStartStop/group/no-preload/serial/DeployApp (8.36s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-211896 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [95ec3561-996a-465b-8d49-a78d3e636366] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0116 04:54:03.660892 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/client.crt: no such file or directory
helpers_test.go:344: "busybox" [95ec3561-996a-465b-8d49-a78d3e636366] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004616097s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-211896 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.36s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-211896 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-211896 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.024385637s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-211896 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/no-preload/serial/Stop (12.05s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-211896 --alsologtostderr -v=3
E0116 04:54:20.586586 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/functional-032172/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-211896 --alsologtostderr -v=3: (12.046228687s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.05s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-211896 -n no-preload-211896
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-211896 -n no-preload-211896: exit status 7 (92.681888ms)

-- stdout --
	Stopped
-- /stdout --

start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-211896 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/no-preload/serial/SecondStart (630.04s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-211896 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0116 04:56:17.543247 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/functional-032172/client.crt: no such file or directory
E0116 04:57:06.706485 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/client.crt: no such file or directory
E0116 04:58:46.726975 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/client.crt: no such file or directory
E0116 04:59:03.661297 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-211896 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (10m29.621299071s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-211896 -n no-preload-211896
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (630.04s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-dbmlg" [ddfe5ff8-49b1-413a-8a3f-15bb50800130] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003799095s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)
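The 9m0s wait above polls for a Running kubernetes-dashboard pod. Roughly the same check can be expressed with kubectl wait (an equivalent sketch, not the helper's actual implementation):

kubectl --context old-k8s-version-940621 -n kubernetes-dashboard wait pod \
  -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m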

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-dbmlg" [ddfe5ff8-49b1-413a-8a3f-15bb50800130] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002853933s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-940621 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.12s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.34s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-940621 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220726-ed811e41
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.34s)
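The verification step parses the JSON image listing. The same data can be inspected by hand, assuming jq is available and that the json format carries a repoTags field per image (both are assumptions, not shown in this log):

out/minikube-linux-arm64 -p old-k8s-version-940621 image list --format=json | jq -r '.[].repoTags[]'
# the three non-minikube images flagged above (busybox and both kindnetd tags)
# would appear here alongside the stock v1.16.0 component images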

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (4.57s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-940621 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-940621 --alsologtostderr -v=1: (1.13966143s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-940621 -n old-k8s-version-940621
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-940621 -n old-k8s-version-940621: exit status 2 (482.419319ms)

-- stdout --
	Paused
-- /stdout --

start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-940621 -n old-k8s-version-940621
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-940621 -n old-k8s-version-940621: exit status 2 (488.609579ms)

-- stdout --
	Stopped
-- /stdout --

start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-940621 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p old-k8s-version-940621 --alsologtostderr -v=1: (1.041566882s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-940621 -n old-k8s-version-940621
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-940621 -n old-k8s-version-940621
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.57s)
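While paused, status reports APIServer as Paused and Kubelet as Stopped and exits 2, which the test tolerates ("may be ok"). The pause/verify/unpause cycle above condenses to:

out/minikube-linux-arm64 pause -p old-k8s-version-940621 --alsologtostderr -v=1
out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-940621 -n old-k8s-version-940621   # Paused, exit status 2
out/minikube-linux-arm64 unpause -p old-k8s-version-940621 --alsologtostderr -v=1
out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-940621 -n old-k8s-version-940621   # exits 0 once unpaused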

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (83.14s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-872773 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-872773 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (1m23.13819855s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (83.14s)

TestStartStop/group/embed-certs/serial/DeployApp (8.36s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-872773 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e090e773-8f37-47be-b00d-51c04e38a019] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e090e773-8f37-47be-b00d-51c04e38a019] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.00450565s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-872773 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.36s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.27s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-872773 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-872773 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.142038704s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-872773 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.27s)

TestStartStop/group/embed-certs/serial/Stop (12.06s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-872773 --alsologtostderr -v=3
E0116 05:01:17.543537 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/functional-032172/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-872773 --alsologtostderr -v=3: (12.059612581s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.06s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-872773 -n embed-certs-872773
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-872773 -n embed-certs-872773: exit status 7 (98.637851ms)

-- stdout --
	Stopped
-- /stdout --

start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-872773 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/embed-certs/serial/SecondStart (601.96s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-872773 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E0116 05:01:43.664118 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/old-k8s-version-940621/client.crt: no such file or directory
E0116 05:01:43.669308 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/old-k8s-version-940621/client.crt: no such file or directory
E0116 05:01:43.679592 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/old-k8s-version-940621/client.crt: no such file or directory
E0116 05:01:43.699828 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/old-k8s-version-940621/client.crt: no such file or directory
E0116 05:01:43.740022 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/old-k8s-version-940621/client.crt: no such file or directory
E0116 05:01:43.820254 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/old-k8s-version-940621/client.crt: no such file or directory
E0116 05:01:43.980474 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/old-k8s-version-940621/client.crt: no such file or directory
E0116 05:01:44.300828 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/old-k8s-version-940621/client.crt: no such file or directory
E0116 05:01:44.941472 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/old-k8s-version-940621/client.crt: no such file or directory
E0116 05:01:46.222547 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/old-k8s-version-940621/client.crt: no such file or directory
E0116 05:01:48.782837 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/old-k8s-version-940621/client.crt: no such file or directory
E0116 05:01:53.903865 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/old-k8s-version-940621/client.crt: no such file or directory
E0116 05:02:04.144341 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/old-k8s-version-940621/client.crt: no such file or directory
E0116 05:02:24.625209 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/old-k8s-version-940621/client.crt: no such file or directory
E0116 05:03:05.585853 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/old-k8s-version-940621/client.crt: no such file or directory
E0116 05:03:29.778965 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/client.crt: no such file or directory
E0116 05:03:46.727060 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/client.crt: no such file or directory
E0116 05:04:03.661796 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/client.crt: no such file or directory
E0116 05:04:27.506238 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/old-k8s-version-940621/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-872773 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (10m1.549844142s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-872773 -n embed-certs-872773
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (601.96s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-9c9q8" [e9ed88fa-fafb-4094-83c3-474446678a7b] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003521604s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-9c9q8" [e9ed88fa-fafb-4094-83c3-474446678a7b] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004241145s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-211896 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-211896 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/no-preload/serial/Pause (3.61s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-211896 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-211896 -n no-preload-211896
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-211896 -n no-preload-211896: exit status 2 (407.48713ms)

-- stdout --
	Paused
-- /stdout --

start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-211896 -n no-preload-211896
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-211896 -n no-preload-211896: exit status 2 (371.6269ms)

-- stdout --
	Stopped
-- /stdout --

start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-211896 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-211896 -n no-preload-211896
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-211896 -n no-preload-211896
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.61s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-587668 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E0116 05:06:17.543536 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/functional-032172/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-587668 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (1m21.298683588s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81.30s)
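--apiserver-port=8444 moves the API server off the default 8443. Whether the flag took effect can be read back from the kubeconfig minikube wrote (a generic check, not recorded in this log):

kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-587668")].cluster.server}'
# expected to end in :8444 given the flag above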

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.39s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-587668 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8d8b9a02-18c2-4491-93ca-ab1f13ac4058] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8d8b9a02-18c2-4491-93ca-ab1f13ac4058] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.00430497s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-587668 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.39s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.54s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-587668 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-587668 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.394706724s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-587668 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.54s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-587668 --alsologtostderr -v=3
E0116 05:06:43.663705 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/old-k8s-version-940621/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-587668 --alsologtostderr -v=3: (12.376658864s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.38s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-587668 -n default-k8s-diff-port-587668
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-587668 -n default-k8s-diff-port-587668: exit status 7 (89.419251ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-587668 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (627.17s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-587668 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E0116 05:07:11.346391 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/old-k8s-version-940621/client.crt: no such file or directory
E0116 05:08:46.727255 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/client.crt: no such file or directory
E0116 05:09:01.049979 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/no-preload-211896/client.crt: no such file or directory
E0116 05:09:01.055293 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/no-preload-211896/client.crt: no such file or directory
E0116 05:09:01.065585 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/no-preload-211896/client.crt: no such file or directory
E0116 05:09:01.085941 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/no-preload-211896/client.crt: no such file or directory
E0116 05:09:01.126676 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/no-preload-211896/client.crt: no such file or directory
E0116 05:09:01.207124 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/no-preload-211896/client.crt: no such file or directory
E0116 05:09:01.368154 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/no-preload-211896/client.crt: no such file or directory
E0116 05:09:01.688718 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/no-preload-211896/client.crt: no such file or directory
E0116 05:09:02.329123 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/no-preload-211896/client.crt: no such file or directory
E0116 05:09:03.609306 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/no-preload-211896/client.crt: no such file or directory
E0116 05:09:03.661548 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/client.crt: no such file or directory
E0116 05:09:06.170169 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/no-preload-211896/client.crt: no such file or directory
E0116 05:09:11.290532 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/no-preload-211896/client.crt: no such file or directory
E0116 05:09:21.531498 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/no-preload-211896/client.crt: no such file or directory
E0116 05:09:42.012812 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/no-preload-211896/client.crt: no such file or directory
E0116 05:10:22.973192 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/no-preload-211896/client.crt: no such file or directory
E0116 05:11:00.587017 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/functional-032172/client.crt: no such file or directory
E0116 05:11:17.542939 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/functional-032172/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-587668 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (10m26.600040161s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-587668 -n default-k8s-diff-port-587668
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (627.17s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-rd7pd" [8b710783-8ccc-48f8-9db7-0e54ee1b25b8] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003563284s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-rd7pd" [8b710783-8ccc-48f8-9db7-0e54ee1b25b8] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004131442s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-872773 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-872773 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/embed-certs/serial/Pause (3.51s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-872773 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-872773 -n embed-certs-872773
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-872773 -n embed-certs-872773: exit status 2 (381.910088ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-872773 -n embed-certs-872773
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-872773 -n embed-certs-872773: exit status 2 (368.287204ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-872773 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-872773 -n embed-certs-872773
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-872773 -n embed-certs-872773
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.51s)
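
Note: the Pause flow above can be reproduced by hand. minikube deliberately makes status exit non-zero while components are paused, which is why the harness logs "status error: exit status 2 (may be ok)". A minimal sketch against the same profile:

	# Pause the profile, inspect component state, then resume.
	out/minikube-linux-arm64 pause -p embed-certs-872773 --alsologtostderr -v=1
	# While paused, APIServer reports "Paused", Kubelet "Stopped", and status exits 2.
	out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-872773 -n embed-certs-872773
	out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-872773 -n embed-certs-872773
	out/minikube-linux-arm64 unpause -p embed-certs-872773 --alsologtostderr -v=1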

TestStartStop/group/newest-cni/serial/FirstStart (51.45s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-738252 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0116 05:11:43.663223 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/old-k8s-version-940621/client.crt: no such file or directory
E0116 05:11:44.893981 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/no-preload-211896/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-738252 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (51.453057114s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (51.45s)
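
Note: the FirstStart invocation is dense; the sketch below restates it with the flags unpacked. The flag explanations are paraphrased, not taken from minikube's own docs:

	# --wait limits readiness checks to the apiserver, system pods, and default
	# service account; ordinary pods cannot schedule yet because no CNI is
	# installed (hence the "cni mode requires additional setup" warnings later).
	# --network-plugin=cni plus the kubeadm pod-network-cidr extra-config leave
	# pod networking to a CNI that is applied separately.
	out/minikube-linux-arm64 start -p newest-cni-738252 --memory=2200 \
	  --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true \
	  --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	  --driver=docker --container-runtime=crio --kubernetes-version=v1.29.0-rc.2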

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.18s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-738252 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-738252 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.183064901s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.18s)

TestStartStop/group/newest-cni/serial/Stop (1.26s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-738252 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-738252 --alsologtostderr -v=3: (1.258025917s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.26s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.27s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-738252 -n newest-cni-738252
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-738252 -n newest-cni-738252: exit status 7 (100.037493ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-738252 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.27s)
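
Note: the stopped case differs from the paused one; in this run status exited 7 with the host reported as "Stopped", and the addon toggle still works against the stopped profile because it only edits the profile's configuration:

	out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-738252 -n newest-cni-738252
	# exit status 7 here accompanied the Stopped host (vs. 2 for the paused case)
	out/minikube-linux-arm64 addons enable dashboard -p newest-cni-738252 --images=MetricsScraper=registry.k8s.io/echoserver:1.4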

TestStartStop/group/newest-cni/serial/SecondStart (31.71s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-738252 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-738252 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (31.319969857s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-738252 -n newest-cni-738252
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (31.71s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-738252 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)
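
Note: the image audit only lists what is loaded in the node and flags entries outside the expected Kubernetes registries. An equivalent manual check might look like the sketch below; the jq filter assumes each JSON entry carries a repoTags array, which this report does not itself confirm:

	out/minikube-linux-arm64 -p newest-cni-738252 image list --format=json | jq -r '.[].repoTags[]'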

TestStartStop/group/newest-cni/serial/Pause (3.46s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-738252 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-738252 -n newest-cni-738252
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-738252 -n newest-cni-738252: exit status 2 (385.546673ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-738252 -n newest-cni-738252
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-738252 -n newest-cni-738252: exit status 2 (374.686845ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-738252 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-738252 -n newest-cni-738252
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-738252 -n newest-cni-738252
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.46s)

TestNetworkPlugins/group/auto/Start (78.24s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-965383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0116 05:13:46.707217 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/client.crt: no such file or directory
E0116 05:13:46.727610 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/client.crt: no such file or directory
E0116 05:14:01.050749 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/no-preload-211896/client.crt: no such file or directory
E0116 05:14:03.661787 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/client.crt: no such file or directory
E0116 05:14:28.734595 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/no-preload-211896/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-965383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m18.236848037s)
--- PASS: TestNetworkPlugins/group/auto/Start (78.24s)

TestNetworkPlugins/group/auto/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-965383 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.34s)

TestNetworkPlugins/group/auto/NetCatPod (12.32s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-965383 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-hjbjv" [1a310d33-e143-4196-9ec0-c255dc2ecf2b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-hjbjv" [1a310d33-e143-4196-9ec0-c255dc2ecf2b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.003639079s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.32s)
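
Note: every NetCatPod step force-replaces the same netcat deployment and then polls until its pod is Ready; the Pending -> Running transitions above are that poll. A rough manual equivalent (kubectl wait stands in for the harness's own polling):

	kubectl --context auto-965383 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context auto-965383 wait --for=condition=ready pod -l app=netcat --timeout=15m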

TestNetworkPlugins/group/auto/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-965383 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

TestNetworkPlugins/group/auto/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-965383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

TestNetworkPlugins/group/auto/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-965383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)
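
Note: Localhost and HairPin run the same probe with different targets. Both execute netcat inside the pod with -z (connect only, no data) and -w 5 (timeout); the first dials the pod's own port over loopback, while the second dials the pod's own "netcat" service, which only succeeds when hairpin NAT lets service traffic loop back to the pod that originated it:

	kubectl --context auto-965383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	kubectl --context auto-965383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"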

TestNetworkPlugins/group/kindnet/Start (50.57s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-965383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-965383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (50.572929064s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (50.57s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-dctv6" [f26b9c58-0e61-49bd-9c2d-15521a301935] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004844057s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
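
Note: ControllerPod just waits for the CNI's own pod to be Running, using the selector and namespace shown above. Outside the harness the same check is a label query; the wait form mirrors the test's 10m budget:

	kubectl --context kindnet-965383 -n kube-system get pods -l app=kindnet
	kubectl --context kindnet-965383 -n kube-system wait --for=condition=ready pod -l app=kindnet --timeout=10m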

TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-965383 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-965383 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-2vhm7" [0e91f5d7-f915-4a1b-8bc0-9ca5b3c7859c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-2vhm7" [0e91f5d7-f915-4a1b-8bc0-9ca5b3c7859c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004221665s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.27s)

TestNetworkPlugins/group/kindnet/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-965383 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.21s)

TestNetworkPlugins/group/kindnet/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-965383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

TestNetworkPlugins/group/kindnet/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-965383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

TestNetworkPlugins/group/calico/Start (79.1s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-965383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E0116 05:16:43.663830 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/old-k8s-version-940621/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-965383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m19.096070691s)
--- PASS: TestNetworkPlugins/group/calico/Start (79.10s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-b446q" [90380cc5-113a-492c-b958-4fa7eff75ab5] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005373091s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-b446q" [90380cc5-113a-492c-b958-4fa7eff75ab5] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.017319969s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-587668 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.15s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-587668 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.35s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (5.38s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-587668 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-587668 --alsologtostderr -v=1: (1.313581648s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-587668 -n default-k8s-diff-port-587668
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-587668 -n default-k8s-diff-port-587668: exit status 2 (518.296256ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-587668 -n default-k8s-diff-port-587668
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-587668 -n default-k8s-diff-port-587668: exit status 2 (523.94449ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-587668 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-587668 --alsologtostderr -v=1: (1.376800536s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-587668 -n default-k8s-diff-port-587668
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-587668 -n default-k8s-diff-port-587668
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (5.38s)
E0116 05:21:32.535355 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/default-k8s-diff-port-587668/client.crt: no such file or directory
E0116 05:21:32.540644 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/default-k8s-diff-port-587668/client.crt: no such file or directory
E0116 05:21:32.550955 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/default-k8s-diff-port-587668/client.crt: no such file or directory
E0116 05:21:32.571261 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/default-k8s-diff-port-587668/client.crt: no such file or directory
E0116 05:21:32.611494 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/default-k8s-diff-port-587668/client.crt: no such file or directory
E0116 05:21:32.691762 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/default-k8s-diff-port-587668/client.crt: no such file or directory
E0116 05:21:32.852190 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/default-k8s-diff-port-587668/client.crt: no such file or directory
E0116 05:21:33.173206 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/default-k8s-diff-port-587668/client.crt: no such file or directory
E0116 05:21:33.813347 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/default-k8s-diff-port-587668/client.crt: no such file or directory
E0116 05:21:35.094267 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/default-k8s-diff-port-587668/client.crt: no such file or directory
E0116 05:21:36.933271 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/kindnet-965383/client.crt: no such file or directory
E0116 05:21:37.654891 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/default-k8s-diff-port-587668/client.crt: no such file or directory
E0116 05:21:42.775843 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/default-k8s-diff-port-587668/client.crt: no such file or directory
E0116 05:21:43.664108 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/old-k8s-version-940621/client.crt: no such file or directory
E0116 05:21:53.016170 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/default-k8s-diff-port-587668/client.crt: no such file or directory
E0116 05:22:13.496865 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/default-k8s-diff-port-587668/client.crt: no such file or directory
E0116 05:22:14.271605 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/auto-965383/client.crt: no such file or directory

TestNetworkPlugins/group/custom-flannel/Start (71.55s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-965383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-965383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m11.546962542s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (71.55s)
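
Note: unlike the named plugins, this variant points --cni at a manifest file. minikube's --cni accepts either a built-in plugin name or, as here, a path to a CNI manifest that it applies once the node is up (a paraphrase of the flag's behavior, not quoted from its docs):

	out/minikube-linux-arm64 start -p custom-flannel-965383 --memory=3072 \
	  --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml \
	  --driver=docker --container-runtime=crio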

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-r2qpb" [afa0ece8-c71a-4b73-bdbb-f96876e8aa34] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00593412s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-965383 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.41s)

TestNetworkPlugins/group/calico/NetCatPod (11.34s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-965383 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-s48cj" [2dc6d4ba-d6cf-4e95-91c8-7ef1483d8e81] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0116 05:18:06.707103 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/old-k8s-version-940621/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-s48cj" [2dc6d4ba-d6cf-4e95-91c8-7ef1483d8e81] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.006325618s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.34s)

TestNetworkPlugins/group/calico/DNS (0.33s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-965383 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.33s)

TestNetworkPlugins/group/calico/Localhost (0.25s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-965383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.25s)

TestNetworkPlugins/group/calico/HairPin (0.28s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-965383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.28s)

TestNetworkPlugins/group/enable-default-cni/Start (89.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-965383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E0116 05:18:46.726807 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-965383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m29.127039207s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (89.13s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-965383 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.41s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.33s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-965383 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-5f7fw" [7c0f6ab1-0bd4-47a8-b455-ea9293a3e076] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-5f7fw" [7c0f6ab1-0bd4-47a8-b455-ea9293a3e076] Running
E0116 05:19:01.051764 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/no-preload-211896/client.crt: no such file or directory
E0116 05:19:03.661737 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/ingress-addon-legacy-865845/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.004222155s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.33s)

TestNetworkPlugins/group/custom-flannel/DNS (0.33s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-965383 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.33s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-965383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.31s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-965383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.29s)

TestNetworkPlugins/group/flannel/Start (67.62s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-965383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0116 05:19:35.548829 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/auto-965383/client.crt: no such file or directory
E0116 05:19:40.669403 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/auto-965383/client.crt: no such file or directory
E0116 05:19:50.910344 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/auto-965383/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-965383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m7.568529707s)
--- PASS: TestNetworkPlugins/group/flannel/Start (67.62s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-965383 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.36s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-965383 replace --force -f testdata/netcat-deployment.yaml
E0116 05:20:09.779172 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/addons-775662/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-znz9r" [a74cec20-51f5-499e-9b6c-973122c687e4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0116 05:20:11.390800 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/auto-965383/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-znz9r" [a74cec20-51f5-499e-9b6c-973122c687e4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.003713895s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.29s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-965383 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-965383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-965383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-nrbf2" [262baf9b-6d25-42ac-ba78-9c6bff6968a6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004151076s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
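
Note: flannel's controller runs in its own kube-flannel namespace rather than kube-system; the selector and namespace in this sketch come straight from the test's wait above:

	kubectl --context flannel-965383 -n kube-flannel get pods -l app=flannel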

TestNetworkPlugins/group/bridge/Start (93.5s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-965383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-965383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m33.503503453s)
--- PASS: TestNetworkPlugins/group/bridge/Start (93.50s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.44s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-965383 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.44s)

TestNetworkPlugins/group/flannel/NetCatPod (11.36s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-965383 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-4qbrt" [1db1b2ac-95d6-4d65-997e-0dfa13b0726e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0116 05:20:52.351392 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/auto-965383/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-4qbrt" [1db1b2ac-95d6-4d65-997e-0dfa13b0726e] Running
E0116 05:20:55.967210 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/kindnet-965383/client.crt: no such file or directory
E0116 05:20:55.972546 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/kindnet-965383/client.crt: no such file or directory
E0116 05:20:55.983455 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/kindnet-965383/client.crt: no such file or directory
E0116 05:20:56.004569 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/kindnet-965383/client.crt: no such file or directory
E0116 05:20:56.044824 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/kindnet-965383/client.crt: no such file or directory
E0116 05:20:56.125513 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/kindnet-965383/client.crt: no such file or directory
E0116 05:20:56.286652 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/kindnet-965383/client.crt: no such file or directory
E0116 05:20:56.607386 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/kindnet-965383/client.crt: no such file or directory
E0116 05:20:57.247754 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/kindnet-965383/client.crt: no such file or directory
E0116 05:20:58.528833 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/kindnet-965383/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004369898s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.36s)

TestNetworkPlugins/group/flannel/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-965383 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.25s)

TestNetworkPlugins/group/flannel/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-965383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.24s)

TestNetworkPlugins/group/flannel/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-965383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.23s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-965383 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

TestNetworkPlugins/group/bridge/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-965383 replace --force -f testdata/netcat-deployment.yaml
E0116 05:22:17.894215 2421005 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/kindnet-965383/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-p6qft" [df77cac6-3555-4a53-8327-87132890910f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-p6qft" [df77cac6-3555-4a53-8327-87132890910f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004546485s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.26s)

TestNetworkPlugins/group/bridge/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-965383 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

TestNetworkPlugins/group/bridge/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-965383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

TestNetworkPlugins/group/bridge/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-965383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                    

Test skip (32/320)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.68s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-565460 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-565460" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-565460
--- SKIP: TestDownloadOnlyKic (0.68s)

                                                
                                    
TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:444: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1786: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33:
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.2s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-812386" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-812386
--- SKIP: TestStartStop/group/disable-driver-mounts (0.20s)

                                                
                                    
TestNetworkPlugins/group/kubenet (5.83s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:523:
----------------------- debugLogs start: kubenet-965383 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-965383

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-965383

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-965383

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-965383

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-965383

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-965383

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-965383

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-965383

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-965383

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-965383

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965383"

>>> host: /etc/hosts:
* Profile "kubenet-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965383"

>>> host: /etc/resolv.conf:
* Profile "kubenet-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965383"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-965383

>>> host: crictl pods:
* Profile "kubenet-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965383"

>>> host: crictl containers:
* Profile "kubenet-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965383"

>>> k8s: describe netcat deployment:
error: context "kubenet-965383" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-965383" does not exist

>>> k8s: netcat logs:
error: context "kubenet-965383" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-965383" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-965383" does not exist

>>> k8s: coredns logs:
error: context "kubenet-965383" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-965383" does not exist

>>> k8s: api server logs:
error: context "kubenet-965383" does not exist

>>> host: /etc/cni:
* Profile "kubenet-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965383"

>>> host: ip a s:
* Profile "kubenet-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965383"

>>> host: ip r s:
* Profile "kubenet-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965383"

>>> host: iptables-save:
* Profile "kubenet-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965383"

>>> host: iptables table nat:
* Profile "kubenet-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965383"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-965383" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-965383" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-965383" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965383"

>>> host: kubelet daemon config:
* Profile "kubenet-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965383"

>>> k8s: kubelet logs:
* Profile "kubenet-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965383"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965383"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965383"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17965-2415678/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Jan 2024 04:43:19 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: kubernetes-upgrade-793693
contexts:
- context:
    cluster: kubernetes-upgrade-793693
    user: kubernetes-upgrade-793693
  name: kubernetes-upgrade-793693
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-793693
  user:
    client-certificate: /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/kubernetes-upgrade-793693/client.crt
    client-key: /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/kubernetes-upgrade-793693/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-965383

>>> host: docker daemon status:
* Profile "kubenet-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965383"

>>> host: docker daemon config:
* Profile "kubenet-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965383"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965383"

>>> host: docker system info:
* Profile "kubenet-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965383"

>>> host: cri-docker daemon status:
* Profile "kubenet-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965383"

>>> host: cri-docker daemon config:
* Profile "kubenet-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965383"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965383"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965383"

>>> host: cri-dockerd version:
* Profile "kubenet-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965383"

>>> host: containerd daemon status:
* Profile "kubenet-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965383"

>>> host: containerd daemon config:
* Profile "kubenet-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965383"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965383"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965383"

>>> host: containerd config dump:
* Profile "kubenet-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965383"

>>> host: crio daemon status:
* Profile "kubenet-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965383"

>>> host: crio daemon config:
* Profile "kubenet-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965383"

>>> host: /etc/crio:
* Profile "kubenet-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965383"

>>> host: crio config:
* Profile "kubenet-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965383"

----------------------- debugLogs end: kubenet-965383 [took: 5.543378791s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-965383" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-965383
--- SKIP: TestNetworkPlugins/group/kubenet (5.83s)
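Every probe in the debugLogs dump above fails the same way because the kubenet-965383 profile is never created: the test skips before any "minikube start", so kubectl has no such context and the shared kubeconfig still carries only the leftover kubernetes-upgrade-793693 entry. Against a profile that does exist, the same information could be gathered manually along these lines (a sketch; substitute a real profile name for the hypothetical <profile> placeholder):

	minikube profile list
	minikube -p <profile> ssh -- sudo crictl pods
	kubectl --context <profile> describe deployment netcat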

                                                
                                    
TestNetworkPlugins/group/cilium (6.42s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523:
----------------------- debugLogs start: cilium-965383 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-965383

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-965383

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-965383

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-965383

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-965383

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-965383

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-965383

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-965383

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-965383

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-965383

>>> host: /etc/nsswitch.conf:
* Profile "cilium-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965383"

>>> host: /etc/hosts:
* Profile "cilium-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965383"

>>> host: /etc/resolv.conf:
* Profile "cilium-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965383"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-965383

>>> host: crictl pods:
* Profile "cilium-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965383"

>>> host: crictl containers:
* Profile "cilium-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965383"

>>> k8s: describe netcat deployment:
error: context "cilium-965383" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-965383" does not exist

>>> k8s: netcat logs:
error: context "cilium-965383" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-965383" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-965383" does not exist

>>> k8s: coredns logs:
error: context "cilium-965383" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-965383" does not exist

>>> k8s: api server logs:
error: context "cilium-965383" does not exist

>>> host: /etc/cni:
* Profile "cilium-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965383"

>>> host: ip a s:
* Profile "cilium-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965383"

>>> host: ip r s:
* Profile "cilium-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965383"

>>> host: iptables-save:
* Profile "cilium-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965383"

>>> host: iptables table nat:
* Profile "cilium-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965383"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-965383

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-965383

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-965383" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-965383" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-965383

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-965383

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-965383" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-965383" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-965383" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-965383" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-965383" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965383"

>>> host: kubelet daemon config:
* Profile "cilium-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965383"

>>> k8s: kubelet logs:
* Profile "cilium-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965383"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965383"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965383"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17965-2415678/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Jan 2024 04:47:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: kubernetes-upgrade-793693
contexts:
- context:
    cluster: kubernetes-upgrade-793693
    extensions:
    - extension:
        last-update: Tue, 16 Jan 2024 04:47:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-793693
  name: kubernetes-upgrade-793693
current-context: kubernetes-upgrade-793693
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-793693
  user:
    client-certificate: /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/kubernetes-upgrade-793693/client.crt
    client-key: /home/jenkins/minikube-integration/17965-2415678/.minikube/profiles/kubernetes-upgrade-793693/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-965383

>>> host: docker daemon status:
* Profile "cilium-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965383"

>>> host: docker daemon config:
* Profile "cilium-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965383"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965383"

>>> host: docker system info:
* Profile "cilium-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965383"

>>> host: cri-docker daemon status:
* Profile "cilium-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965383"

>>> host: cri-docker daemon config:
* Profile "cilium-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965383"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965383"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965383"

>>> host: cri-dockerd version:
* Profile "cilium-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965383"

>>> host: containerd daemon status:
* Profile "cilium-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965383"

>>> host: containerd daemon config:
* Profile "cilium-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965383"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965383"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965383"

>>> host: containerd config dump:
* Profile "cilium-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965383"

>>> host: crio daemon status:
* Profile "cilium-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965383"

>>> host: crio daemon config:
* Profile "cilium-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965383"

>>> host: /etc/crio:
* Profile "cilium-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965383"

>>> host: crio config:
* Profile "cilium-965383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965383"

----------------------- debugLogs end: cilium-965383 [took: 6.086709916s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-965383" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-965383
--- SKIP: TestNetworkPlugins/group/cilium (6.42s)
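The cilium variant is skipped outright (outdated and interfering with other tests, per net_test.go:102), so its debugLogs fail identically to kubenet's. Were it enabled, the profile would be started with Cilium as the CNI; a minimal sketch of such an invocation using minikube's --cni flag (the exact flags the suite would pass are not shown in this log and are assumed here):

	out/minikube-linux-arm64 start -p cilium-965383 \
	  --driver=docker --container-runtime=crio --cni=cilium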

                                                
                                    